00:00:00.000 Started by upstream project "autotest-nightly" build number 4350 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3713 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.163 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.165 The recommended git tool is: git 00:00:00.166 using credential 00000000-0000-0000-0000-000000000002 00:00:00.167 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.199 Fetching changes from the remote Git repository 00:00:00.201 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.235 Using shallow fetch with depth 1 00:00:00.235 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.235 > git --version # timeout=10 00:00:00.257 > git --version # 'git version 2.39.2' 00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:11.373 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:11.385 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:11.396 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:11.396 > git config core.sparsecheckout # timeout=10 00:00:11.406 > git read-tree -mu HEAD # timeout=10 00:00:11.421 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:11.441 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:11.441 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:11.578 [Pipeline] Start of Pipeline 00:00:11.594 [Pipeline] library 00:00:11.595 Loading library shm_lib@master 00:00:11.596 Library shm_lib@master is cached. Copying from home. 00:00:11.613 [Pipeline] node 00:00:11.626 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:11.627 [Pipeline] { 00:00:11.638 [Pipeline] catchError 00:00:11.639 [Pipeline] { 00:00:11.652 [Pipeline] wrap 00:00:11.661 [Pipeline] { 00:00:11.670 [Pipeline] stage 00:00:11.672 [Pipeline] { (Prologue) 00:00:11.689 [Pipeline] echo 00:00:11.691 Node: VM-host-SM9 00:00:11.697 [Pipeline] cleanWs 00:00:11.706 [WS-CLEANUP] Deleting project workspace... 00:00:11.706 [WS-CLEANUP] Deferred wipeout is used... 
00:00:11.712 [WS-CLEANUP] done 00:00:11.935 [Pipeline] setCustomBuildProperty 00:00:12.021 [Pipeline] httpRequest 00:00:12.440 [Pipeline] echo 00:00:12.441 Sorcerer 10.211.164.112 is alive 00:00:12.451 [Pipeline] retry 00:00:12.453 [Pipeline] { 00:00:12.468 [Pipeline] httpRequest 00:00:12.473 HttpMethod: GET 00:00:12.473 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.474 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.491 Response Code: HTTP/1.1 200 OK 00:00:12.492 Success: Status code 200 is in the accepted range: 200,404 00:00:12.492 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.113 [Pipeline] } 00:00:17.131 [Pipeline] // retry 00:00:17.140 [Pipeline] sh 00:00:17.422 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.438 [Pipeline] httpRequest 00:00:17.840 [Pipeline] echo 00:00:17.842 Sorcerer 10.211.164.112 is alive 00:00:17.852 [Pipeline] retry 00:00:17.853 [Pipeline] { 00:00:17.868 [Pipeline] httpRequest 00:00:17.872 HttpMethod: GET 00:00:17.873 URL: http://10.211.164.112/packages/spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz 00:00:17.874 Sending request to url: http://10.211.164.112/packages/spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz 00:00:17.917 Response Code: HTTP/1.1 200 OK 00:00:17.917 Success: Status code 200 is in the accepted range: 200,404 00:00:17.918 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz 00:01:41.626 [Pipeline] } 00:01:41.643 [Pipeline] // retry 00:01:41.650 [Pipeline] sh 00:01:41.932 + tar --no-same-owner -xf spdk_52a4134875252629d5d87a15dc337c6bfe0b3746.tar.gz 00:01:45.230 [Pipeline] sh 00:01:45.510 + git -C spdk log --oneline -n5 00:01:45.510 52a413487 bdev: do not retry nomem I/Os during aborting them 00:01:45.510 d13942918 bdev: simplify bdev_reset_freeze_channel 00:01:45.510 0edc184ec accel/mlx5: Support mkey registration 00:01:45.510 06358c250 bdev/nvme: use poll_group's fd_group to register interrupts 00:01:45.510 1ae735a5d nvme: add poll_group interrupt callback 00:01:45.529 [Pipeline] writeFile 00:01:45.544 [Pipeline] sh 00:01:45.826 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:45.837 [Pipeline] sh 00:01:46.117 + cat autorun-spdk.conf 00:01:46.117 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.117 SPDK_TEST_NVMF=1 00:01:46.117 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.117 SPDK_TEST_URING=1 00:01:46.117 SPDK_TEST_VFIOUSER=1 00:01:46.117 SPDK_TEST_USDT=1 00:01:46.117 SPDK_RUN_ASAN=1 00:01:46.117 SPDK_RUN_UBSAN=1 00:01:46.117 NET_TYPE=virt 00:01:46.117 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.124 RUN_NIGHTLY=1 00:01:46.125 [Pipeline] } 00:01:46.139 [Pipeline] // stage 00:01:46.153 [Pipeline] stage 00:01:46.155 [Pipeline] { (Run VM) 00:01:46.167 [Pipeline] sh 00:01:46.482 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:46.482 + echo 'Start stage prepare_nvme.sh' 00:01:46.482 Start stage prepare_nvme.sh 00:01:46.482 + [[ -n 4 ]] 00:01:46.482 + disk_prefix=ex4 00:01:46.482 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:46.482 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:46.482 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:46.482 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.482 ++ SPDK_TEST_NVMF=1 00:01:46.482 
++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.482 ++ SPDK_TEST_URING=1 00:01:46.482 ++ SPDK_TEST_VFIOUSER=1 00:01:46.482 ++ SPDK_TEST_USDT=1 00:01:46.482 ++ SPDK_RUN_ASAN=1 00:01:46.482 ++ SPDK_RUN_UBSAN=1 00:01:46.482 ++ NET_TYPE=virt 00:01:46.482 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.482 ++ RUN_NIGHTLY=1 00:01:46.482 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:46.482 + nvme_files=() 00:01:46.482 + declare -A nvme_files 00:01:46.482 + backend_dir=/var/lib/libvirt/images/backends 00:01:46.482 + nvme_files['nvme.img']=5G 00:01:46.482 + nvme_files['nvme-cmb.img']=5G 00:01:46.482 + nvme_files['nvme-multi0.img']=4G 00:01:46.482 + nvme_files['nvme-multi1.img']=4G 00:01:46.482 + nvme_files['nvme-multi2.img']=4G 00:01:46.482 + nvme_files['nvme-openstack.img']=8G 00:01:46.482 + nvme_files['nvme-zns.img']=5G 00:01:46.482 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:46.482 + (( SPDK_TEST_FTL == 1 )) 00:01:46.482 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:46.482 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:46.482 + for nvme in "${!nvme_files[@]}" 00:01:46.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:46.482 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.482 + for nvme in "${!nvme_files[@]}" 00:01:46.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:46.482 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.482 + for nvme in "${!nvme_files[@]}" 00:01:46.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:46.482 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:46.482 + for nvme in "${!nvme_files[@]}" 00:01:46.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:46.482 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.482 + for nvme in "${!nvme_files[@]}" 00:01:46.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:46.482 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.482 + for nvme in "${!nvme_files[@]}" 00:01:46.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:46.482 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.482 + for nvme in "${!nvme_files[@]}" 00:01:46.483 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:46.741 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.741 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:46.741 + echo 'End stage prepare_nvme.sh' 00:01:46.741 End stage prepare_nvme.sh 00:01:46.752 [Pipeline] sh 00:01:47.034 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:47.034 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:47.034 00:01:47.034 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:47.034 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:47.034 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:47.034 HELP=0 00:01:47.034 DRY_RUN=0 00:01:47.034 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:47.034 NVME_DISKS_TYPE=nvme,nvme, 00:01:47.034 NVME_AUTO_CREATE=0 00:01:47.034 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:47.034 NVME_CMB=,, 00:01:47.034 NVME_PMR=,, 00:01:47.034 NVME_ZNS=,, 00:01:47.034 NVME_MS=,, 00:01:47.034 NVME_FDP=,, 00:01:47.034 SPDK_VAGRANT_DISTRO=fedora39 00:01:47.034 SPDK_VAGRANT_VMCPU=10 00:01:47.034 SPDK_VAGRANT_VMRAM=12288 00:01:47.034 SPDK_VAGRANT_PROVIDER=libvirt 00:01:47.034 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:47.034 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:47.034 SPDK_OPENSTACK_NETWORK=0 00:01:47.034 VAGRANT_PACKAGE_BOX=0 00:01:47.034 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:47.034 FORCE_DISTRO=true 00:01:47.034 VAGRANT_BOX_VERSION= 00:01:47.034 EXTRA_VAGRANTFILES= 00:01:47.034 NIC_MODEL=e1000 00:01:47.034 00:01:47.034 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:47.034 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:50.354 Bringing machine 'default' up with 'libvirt' provider... 00:01:50.354 ==> default: Creating image (snapshot of base box volume). 00:01:50.354 ==> default: Creating domain with the following settings... 
00:01:50.354 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733828697_06045f1e7dcc620e67fc 00:01:50.354 ==> default: -- Domain type: kvm 00:01:50.354 ==> default: -- Cpus: 10 00:01:50.354 ==> default: -- Feature: acpi 00:01:50.354 ==> default: -- Feature: apic 00:01:50.354 ==> default: -- Feature: pae 00:01:50.354 ==> default: -- Memory: 12288M 00:01:50.354 ==> default: -- Memory Backing: hugepages: 00:01:50.354 ==> default: -- Management MAC: 00:01:50.354 ==> default: -- Loader: 00:01:50.354 ==> default: -- Nvram: 00:01:50.354 ==> default: -- Base box: spdk/fedora39 00:01:50.354 ==> default: -- Storage pool: default 00:01:50.355 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733828697_06045f1e7dcc620e67fc.img (20G) 00:01:50.355 ==> default: -- Volume Cache: default 00:01:50.355 ==> default: -- Kernel: 00:01:50.355 ==> default: -- Initrd: 00:01:50.355 ==> default: -- Graphics Type: vnc 00:01:50.355 ==> default: -- Graphics Port: -1 00:01:50.355 ==> default: -- Graphics IP: 127.0.0.1 00:01:50.355 ==> default: -- Graphics Password: Not defined 00:01:50.355 ==> default: -- Video Type: cirrus 00:01:50.355 ==> default: -- Video VRAM: 9216 00:01:50.355 ==> default: -- Sound Type: 00:01:50.355 ==> default: -- Keymap: en-us 00:01:50.355 ==> default: -- TPM Path: 00:01:50.355 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:50.355 ==> default: -- Command line args: 00:01:50.355 ==> default: -> value=-device, 00:01:50.355 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:50.355 ==> default: -> value=-drive, 00:01:50.355 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:50.355 ==> default: -> value=-device, 00:01:50.355 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.355 ==> default: -> value=-device, 00:01:50.355 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:50.355 ==> default: -> value=-drive, 00:01:50.355 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:50.355 ==> default: -> value=-device, 00:01:50.355 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.355 ==> default: -> value=-drive, 00:01:50.355 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:50.355 ==> default: -> value=-device, 00:01:50.355 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.355 ==> default: -> value=-drive, 00:01:50.355 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:50.355 ==> default: -> value=-device, 00:01:50.355 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.613 ==> default: Creating shared folders metadata... 00:01:50.613 ==> default: Starting domain. 00:01:51.993 ==> default: Waiting for domain to get an IP address... 00:02:06.871 ==> default: Waiting for SSH to become available... 00:02:08.249 ==> default: Configuring and enabling network interfaces... 
00:02:12.433 default: SSH address: 192.168.121.228:22 00:02:12.433 default: SSH username: vagrant 00:02:12.433 default: SSH auth method: private key 00:02:14.361 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:22.474 ==> default: Mounting SSHFS shared folder... 00:02:23.411 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:23.411 ==> default: Checking Mount.. 00:02:24.788 ==> default: Folder Successfully Mounted! 00:02:24.788 ==> default: Running provisioner: file... 00:02:25.356 default: ~/.gitconfig => .gitconfig 00:02:25.615 00:02:25.615 SUCCESS! 00:02:25.615 00:02:25.615 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:25.615 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:25.615 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:25.615 00:02:25.624 [Pipeline] } 00:02:25.640 [Pipeline] // stage 00:02:25.650 [Pipeline] dir 00:02:25.651 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:25.653 [Pipeline] { 00:02:25.667 [Pipeline] catchError 00:02:25.669 [Pipeline] { 00:02:25.682 [Pipeline] sh 00:02:25.964 + vagrant ssh-config --host vagrant 00:02:25.964 + sed -ne /^Host/,$p 00:02:25.964 + tee ssh_conf 00:02:30.219 Host vagrant 00:02:30.219 HostName 192.168.121.228 00:02:30.219 User vagrant 00:02:30.219 Port 22 00:02:30.219 UserKnownHostsFile /dev/null 00:02:30.219 StrictHostKeyChecking no 00:02:30.219 PasswordAuthentication no 00:02:30.219 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:30.219 IdentitiesOnly yes 00:02:30.219 LogLevel FATAL 00:02:30.219 ForwardAgent yes 00:02:30.219 ForwardX11 yes 00:02:30.219 00:02:30.234 [Pipeline] withEnv 00:02:30.236 [Pipeline] { 00:02:30.251 [Pipeline] sh 00:02:30.532 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:30.532 source /etc/os-release 00:02:30.532 [[ -e /image.version ]] && img=$(< /image.version) 00:02:30.532 # Minimal, systemd-like check. 00:02:30.532 if [[ -e /.dockerenv ]]; then 00:02:30.532 # Clear garbage from the node's name: 00:02:30.532 # agt-er_autotest_547-896 -> autotest_547-896 00:02:30.532 # $HOSTNAME is the actual container id 00:02:30.532 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:30.532 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:30.532 # We can assume this is a mount from a host where container is running, 00:02:30.532 # so fetch its hostname to easily identify the target swarm worker. 
00:02:30.532 container="$(< /etc/hostname) ($agent)" 00:02:30.532 else 00:02:30.532 # Fallback 00:02:30.532 container=$agent 00:02:30.532 fi 00:02:30.532 fi 00:02:30.532 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:30.532 00:02:30.803 [Pipeline] } 00:02:30.819 [Pipeline] // withEnv 00:02:30.829 [Pipeline] setCustomBuildProperty 00:02:30.845 [Pipeline] stage 00:02:30.847 [Pipeline] { (Tests) 00:02:30.866 [Pipeline] sh 00:02:31.147 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:31.423 [Pipeline] sh 00:02:31.705 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:31.981 [Pipeline] timeout 00:02:31.982 Timeout set to expire in 1 hr 0 min 00:02:31.984 [Pipeline] { 00:02:32.000 [Pipeline] sh 00:02:32.281 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:32.853 HEAD is now at 52a413487 bdev: do not retry nomem I/Os during aborting them 00:02:32.866 [Pipeline] sh 00:02:33.147 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:33.419 [Pipeline] sh 00:02:33.700 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:33.979 [Pipeline] sh 00:02:34.261 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:34.520 ++ readlink -f spdk_repo 00:02:34.520 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:34.520 + [[ -n /home/vagrant/spdk_repo ]] 00:02:34.520 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:34.520 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:34.520 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:34.520 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:34.520 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:34.520 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:34.520 + cd /home/vagrant/spdk_repo 00:02:34.520 + source /etc/os-release 00:02:34.520 ++ NAME='Fedora Linux' 00:02:34.520 ++ VERSION='39 (Cloud Edition)' 00:02:34.520 ++ ID=fedora 00:02:34.520 ++ VERSION_ID=39 00:02:34.520 ++ VERSION_CODENAME= 00:02:34.520 ++ PLATFORM_ID=platform:f39 00:02:34.520 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:34.520 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:34.520 ++ LOGO=fedora-logo-icon 00:02:34.520 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:34.520 ++ HOME_URL=https://fedoraproject.org/ 00:02:34.520 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:34.520 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:34.520 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:34.520 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:34.520 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:34.520 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:34.520 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:34.520 ++ SUPPORT_END=2024-11-12 00:02:34.520 ++ VARIANT='Cloud Edition' 00:02:34.520 ++ VARIANT_ID=cloud 00:02:34.520 + uname -a 00:02:34.520 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:34.520 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:34.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:34.816 Hugepages 00:02:34.816 node hugesize free / total 00:02:34.816 node0 1048576kB 0 / 0 00:02:34.816 node0 2048kB 0 / 0 00:02:34.816 00:02:34.816 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:35.083 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:35.083 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:35.084 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:35.084 + rm -f /tmp/spdk-ld-path 00:02:35.084 + source autorun-spdk.conf 00:02:35.084 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.084 ++ SPDK_TEST_NVMF=1 00:02:35.084 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.084 ++ SPDK_TEST_URING=1 00:02:35.084 ++ SPDK_TEST_VFIOUSER=1 00:02:35.084 ++ SPDK_TEST_USDT=1 00:02:35.084 ++ SPDK_RUN_ASAN=1 00:02:35.084 ++ SPDK_RUN_UBSAN=1 00:02:35.084 ++ NET_TYPE=virt 00:02:35.084 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.084 ++ RUN_NIGHTLY=1 00:02:35.084 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:35.084 + [[ -n '' ]] 00:02:35.084 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:35.084 + for M in /var/spdk/build-*-manifest.txt 00:02:35.084 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:35.084 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.084 + for M in /var/spdk/build-*-manifest.txt 00:02:35.084 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:35.084 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.084 + for M in /var/spdk/build-*-manifest.txt 00:02:35.084 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:35.084 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.084 ++ uname 00:02:35.084 + [[ Linux == \L\i\n\u\x ]] 00:02:35.084 + sudo dmesg -T 00:02:35.084 + sudo dmesg --clear 00:02:35.084 + dmesg_pid=5249 00:02:35.084 + [[ Fedora Linux == FreeBSD ]] 00:02:35.084 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:02:35.084 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.084 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:35.084 + sudo dmesg -Tw 00:02:35.084 + [[ -x /usr/src/fio-static/fio ]] 00:02:35.084 + export FIO_BIN=/usr/src/fio-static/fio 00:02:35.084 + FIO_BIN=/usr/src/fio-static/fio 00:02:35.084 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:35.084 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:35.084 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:35.084 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.084 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.084 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:35.084 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.084 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.084 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.084 11:05:41 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:35.084 11:05:41 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_ASAN=1 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_UBSAN=1 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.084 11:05:41 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:35.084 11:05:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:35.084 11:05:41 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.343 11:05:41 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:35.343 11:05:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:35.343 11:05:41 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:35.343 11:05:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:35.343 11:05:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.343 11:05:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.343 11:05:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.343 11:05:41 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.343 11:05:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.343 11:05:41 -- paths/export.sh@5 -- $ export PATH 00:02:35.344 11:05:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.344 11:05:41 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:35.344 11:05:41 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:35.344 11:05:41 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733828741.XXXXXX 00:02:35.344 11:05:41 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733828741.1vdEk0 00:02:35.344 11:05:41 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:35.344 11:05:41 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:35.344 11:05:41 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:35.344 11:05:41 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:35.344 11:05:41 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:35.344 11:05:41 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:35.344 11:05:41 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:35.344 11:05:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.344 11:05:41 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:35.344 11:05:41 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:35.344 11:05:41 -- pm/common@17 -- $ local monitor 00:02:35.344 11:05:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.344 11:05:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.344 11:05:41 -- pm/common@21 -- $ date +%s 00:02:35.344 11:05:41 -- pm/common@25 -- $ sleep 1 00:02:35.344 11:05:41 -- pm/common@21 -- $ date +%s 00:02:35.344 11:05:41 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733828741 00:02:35.344 11:05:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733828741 00:02:35.344 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733828741_collect-cpu-load.pm.log 00:02:35.344 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733828741_collect-vmstat.pm.log 00:02:36.281 11:05:42 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:36.281 11:05:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:36.281 11:05:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:36.281 11:05:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:36.281 11:05:42 -- spdk/autobuild.sh@16 -- $ date -u 00:02:36.281 Tue Dec 10 11:05:42 AM UTC 2024 00:02:36.281 11:05:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:36.281 v25.01-pre-324-g52a413487 00:02:36.281 11:05:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:36.281 11:05:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:36.281 11:05:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:36.281 11:05:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.281 11:05:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.281 ************************************ 00:02:36.281 START TEST asan 00:02:36.281 ************************************ 00:02:36.281 using asan 00:02:36.281 11:05:42 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:36.281 00:02:36.281 real 0m0.000s 00:02:36.281 user 0m0.000s 00:02:36.281 sys 0m0.000s 00:02:36.281 11:05:42 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.281 11:05:42 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.281 ************************************ 00:02:36.281 END TEST asan 00:02:36.281 ************************************ 00:02:36.281 11:05:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:36.281 11:05:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:36.281 11:05:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:36.281 11:05:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.281 11:05:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.281 ************************************ 00:02:36.281 START TEST ubsan 00:02:36.282 ************************************ 00:02:36.282 using ubsan 00:02:36.282 11:05:43 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:36.282 00:02:36.282 real 0m0.000s 00:02:36.282 user 0m0.000s 00:02:36.282 sys 0m0.000s 00:02:36.282 11:05:43 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.282 11:05:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.282 ************************************ 00:02:36.282 END TEST ubsan 00:02:36.282 ************************************ 00:02:36.282 11:05:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:36.282 11:05:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:36.282 11:05:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:36.282 11:05:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:36.282 11:05:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:36.282 11:05:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:36.282 11:05:43 -- spdk/autobuild.sh@59 -- 
$ [[ 0 -eq 1 ]] 00:02:36.282 11:05:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:36.282 11:05:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:36.540 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:36.540 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:37.107 Using 'verbs' RDMA provider 00:02:52.926 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:05.128 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:05.128 Creating mk/config.mk...done. 00:03:05.128 Creating mk/cc.flags.mk...done. 00:03:05.128 Type 'make' to build. 00:03:05.128 11:06:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:05.128 11:06:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:05.128 11:06:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:05.128 11:06:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.128 ************************************ 00:03:05.128 START TEST make 00:03:05.128 ************************************ 00:03:05.128 11:06:10 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:05.128 make[1]: Nothing to be done for 'all'. 00:03:06.088 The Meson build system 00:03:06.088 Version: 1.5.0 00:03:06.088 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:03:06.088 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:06.088 Build type: native build 00:03:06.088 Project name: libvfio-user 00:03:06.088 Project version: 0.0.1 00:03:06.088 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:06.088 C linker for the host machine: cc ld.bfd 2.40-14 00:03:06.088 Host machine cpu family: x86_64 00:03:06.088 Host machine cpu: x86_64 00:03:06.088 Run-time dependency threads found: YES 00:03:06.088 Library dl found: YES 00:03:06.088 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:06.088 Run-time dependency json-c found: YES 0.17 00:03:06.088 Run-time dependency cmocka found: YES 1.1.7 00:03:06.088 Program pytest-3 found: NO 00:03:06.088 Program flake8 found: NO 00:03:06.088 Program misspell-fixer found: NO 00:03:06.088 Program restructuredtext-lint found: NO 00:03:06.088 Program valgrind found: YES (/usr/bin/valgrind) 00:03:06.089 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:06.089 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:06.089 Compiler for C supports arguments -Wwrite-strings: YES 00:03:06.089 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:06.089 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:03:06.089 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:03:06.089 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:06.089 Build targets in project: 8 00:03:06.089 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:06.089 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:06.089 00:03:06.089 libvfio-user 0.0.1 00:03:06.089 00:03:06.089 User defined options 00:03:06.089 buildtype : debug 00:03:06.089 default_library: shared 00:03:06.089 libdir : /usr/local/lib 00:03:06.089 00:03:06.089 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:06.347 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:06.605 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:06.605 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:06.605 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:06.605 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:06.605 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:06.605 [6/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:06.605 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:06.605 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:06.605 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:06.605 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:06.605 [11/37] Compiling C object samples/null.p/null.c.o 00:03:06.605 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:06.863 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:06.863 [14/37] Compiling C object samples/server.p/server.c.o 00:03:06.864 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:06.864 [16/37] Compiling C object samples/client.p/client.c.o 00:03:06.864 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:06.864 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:06.864 [19/37] Linking target samples/client 00:03:06.864 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:06.864 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:06.864 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:06.864 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:06.864 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:06.864 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:06.864 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:06.864 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:06.864 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:03:07.122 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:07.122 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:07.122 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:07.122 [32/37] Linking target test/unit_tests 00:03:07.122 [33/37] Linking target samples/server 00:03:07.122 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:07.122 [35/37] Linking target samples/null 00:03:07.122 [36/37] Linking target samples/lspci 00:03:07.122 [37/37] Linking target samples/gpio-pci-idio-16 00:03:07.122 INFO: autodetecting backend as ninja 00:03:07.122 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:07.379 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:07.637 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:07.637 ninja: no work to do. 00:03:19.835 The Meson build system 00:03:19.835 Version: 1.5.0 00:03:19.835 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:19.835 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:19.835 Build type: native build 00:03:19.835 Program cat found: YES (/usr/bin/cat) 00:03:19.835 Project name: DPDK 00:03:19.835 Project version: 24.03.0 00:03:19.835 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:19.835 C linker for the host machine: cc ld.bfd 2.40-14 00:03:19.835 Host machine cpu family: x86_64 00:03:19.835 Host machine cpu: x86_64 00:03:19.835 Message: ## Building in Developer Mode ## 00:03:19.835 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:19.835 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:19.835 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:19.835 Program python3 found: YES (/usr/bin/python3) 00:03:19.835 Program cat found: YES (/usr/bin/cat) 00:03:19.835 Compiler for C supports arguments -march=native: YES 00:03:19.835 Checking for size of "void *" : 8 00:03:19.835 Checking for size of "void *" : 8 (cached) 00:03:19.835 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:19.835 Library m found: YES 00:03:19.835 Library numa found: YES 00:03:19.835 Has header "numaif.h" : YES 00:03:19.835 Library fdt found: NO 00:03:19.835 Library execinfo found: NO 00:03:19.835 Has header "execinfo.h" : YES 00:03:19.835 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:19.835 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:19.835 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:19.835 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:19.835 Run-time dependency openssl found: YES 3.1.1 00:03:19.835 Run-time dependency libpcap found: YES 1.10.4 00:03:19.835 Has header "pcap.h" with dependency libpcap: YES 00:03:19.835 Compiler for C supports arguments -Wcast-qual: YES 00:03:19.835 Compiler for C supports arguments -Wdeprecated: YES 00:03:19.835 Compiler for C supports arguments -Wformat: YES 00:03:19.835 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:19.835 Compiler for C supports arguments -Wformat-security: NO 00:03:19.835 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:19.835 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:19.835 Compiler for C supports arguments -Wnested-externs: YES 00:03:19.835 Compiler for C supports arguments -Wold-style-definition: YES 00:03:19.835 Compiler for C supports arguments -Wpointer-arith: YES 00:03:19.835 Compiler for C supports arguments -Wsign-compare: YES 00:03:19.835 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:19.835 Compiler for C supports arguments -Wundef: YES 00:03:19.835 Compiler for C supports arguments -Wwrite-strings: YES 00:03:19.835 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:19.835 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:19.835 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:19.835 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:03:19.835 Program objdump found: YES (/usr/bin/objdump) 00:03:19.835 Compiler for C supports arguments -mavx512f: YES 00:03:19.835 Checking if "AVX512 checking" compiles: YES 00:03:19.835 Fetching value of define "__SSE4_2__" : 1 00:03:19.835 Fetching value of define "__AES__" : 1 00:03:19.835 Fetching value of define "__AVX__" : 1 00:03:19.835 Fetching value of define "__AVX2__" : 1 00:03:19.835 Fetching value of define "__AVX512BW__" : (undefined) 00:03:19.835 Fetching value of define "__AVX512CD__" : (undefined) 00:03:19.835 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:19.835 Fetching value of define "__AVX512F__" : (undefined) 00:03:19.835 Fetching value of define "__AVX512VL__" : (undefined) 00:03:19.835 Fetching value of define "__PCLMUL__" : 1 00:03:19.835 Fetching value of define "__RDRND__" : 1 00:03:19.835 Fetching value of define "__RDSEED__" : 1 00:03:19.835 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:19.835 Fetching value of define "__znver1__" : (undefined) 00:03:19.835 Fetching value of define "__znver2__" : (undefined) 00:03:19.835 Fetching value of define "__znver3__" : (undefined) 00:03:19.835 Fetching value of define "__znver4__" : (undefined) 00:03:19.835 Library asan found: YES 00:03:19.835 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:19.835 Message: lib/log: Defining dependency "log" 00:03:19.835 Message: lib/kvargs: Defining dependency "kvargs" 00:03:19.835 Message: lib/telemetry: Defining dependency "telemetry" 00:03:19.835 Library rt found: YES 00:03:19.835 Checking for function "getentropy" : NO 00:03:19.835 Message: lib/eal: Defining dependency "eal" 00:03:19.835 Message: lib/ring: Defining dependency "ring" 00:03:19.835 Message: lib/rcu: Defining dependency "rcu" 00:03:19.835 Message: lib/mempool: Defining dependency "mempool" 00:03:19.835 Message: lib/mbuf: Defining dependency "mbuf" 00:03:19.835 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:19.835 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:19.835 Compiler for C supports arguments -mpclmul: YES 00:03:19.835 Compiler for C supports arguments -maes: YES 00:03:19.835 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:19.835 Compiler for C supports arguments -mavx512bw: YES 00:03:19.835 Compiler for C supports arguments -mavx512dq: YES 00:03:19.835 Compiler for C supports arguments -mavx512vl: YES 00:03:19.835 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:19.835 Compiler for C supports arguments -mavx2: YES 00:03:19.835 Compiler for C supports arguments -mavx: YES 00:03:19.835 Message: lib/net: Defining dependency "net" 00:03:19.835 Message: lib/meter: Defining dependency "meter" 00:03:19.835 Message: lib/ethdev: Defining dependency "ethdev" 00:03:19.835 Message: lib/pci: Defining dependency "pci" 00:03:19.835 Message: lib/cmdline: Defining dependency "cmdline" 00:03:19.835 Message: lib/hash: Defining dependency "hash" 00:03:19.835 Message: lib/timer: Defining dependency "timer" 00:03:19.835 Message: lib/compressdev: Defining dependency "compressdev" 00:03:19.835 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:19.835 Message: lib/dmadev: Defining dependency "dmadev" 00:03:19.835 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:19.835 Message: lib/power: Defining dependency "power" 00:03:19.835 Message: lib/reorder: Defining dependency "reorder" 00:03:19.835 Message: lib/security: Defining dependency "security" 00:03:19.835 Has header 
"linux/userfaultfd.h" : YES 00:03:19.835 Has header "linux/vduse.h" : YES 00:03:19.835 Message: lib/vhost: Defining dependency "vhost" 00:03:19.835 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:19.835 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:19.835 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:19.835 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:19.835 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:19.835 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:19.835 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:19.835 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:19.835 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:19.835 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:19.835 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:19.835 Configuring doxy-api-html.conf using configuration 00:03:19.835 Configuring doxy-api-man.conf using configuration 00:03:19.835 Program mandb found: YES (/usr/bin/mandb) 00:03:19.835 Program sphinx-build found: NO 00:03:19.835 Configuring rte_build_config.h using configuration 00:03:19.835 Message: 00:03:19.835 ================= 00:03:19.835 Applications Enabled 00:03:19.836 ================= 00:03:19.836 00:03:19.836 apps: 00:03:19.836 00:03:19.836 00:03:19.836 Message: 00:03:19.836 ================= 00:03:19.836 Libraries Enabled 00:03:19.836 ================= 00:03:19.836 00:03:19.836 libs: 00:03:19.836 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:19.836 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:19.836 cryptodev, dmadev, power, reorder, security, vhost, 00:03:19.836 00:03:19.836 Message: 00:03:19.836 =============== 00:03:19.836 Drivers Enabled 00:03:19.836 =============== 00:03:19.836 00:03:19.836 common: 00:03:19.836 00:03:19.836 bus: 00:03:19.836 pci, vdev, 00:03:19.836 mempool: 00:03:19.836 ring, 00:03:19.836 dma: 00:03:19.836 00:03:19.836 net: 00:03:19.836 00:03:19.836 crypto: 00:03:19.836 00:03:19.836 compress: 00:03:19.836 00:03:19.836 vdpa: 00:03:19.836 00:03:19.836 00:03:19.836 Message: 00:03:19.836 ================= 00:03:19.836 Content Skipped 00:03:19.836 ================= 00:03:19.836 00:03:19.836 apps: 00:03:19.836 dumpcap: explicitly disabled via build config 00:03:19.836 graph: explicitly disabled via build config 00:03:19.836 pdump: explicitly disabled via build config 00:03:19.836 proc-info: explicitly disabled via build config 00:03:19.836 test-acl: explicitly disabled via build config 00:03:19.836 test-bbdev: explicitly disabled via build config 00:03:19.836 test-cmdline: explicitly disabled via build config 00:03:19.836 test-compress-perf: explicitly disabled via build config 00:03:19.836 test-crypto-perf: explicitly disabled via build config 00:03:19.836 test-dma-perf: explicitly disabled via build config 00:03:19.836 test-eventdev: explicitly disabled via build config 00:03:19.836 test-fib: explicitly disabled via build config 00:03:19.836 test-flow-perf: explicitly disabled via build config 00:03:19.836 test-gpudev: explicitly disabled via build config 00:03:19.836 test-mldev: explicitly disabled via build config 00:03:19.836 test-pipeline: explicitly disabled via build config 00:03:19.836 test-pmd: explicitly disabled via build config 00:03:19.836 test-regex: explicitly disabled via build config 00:03:19.836 
test-sad: explicitly disabled via build config 00:03:19.836 test-security-perf: explicitly disabled via build config 00:03:19.836 00:03:19.836 libs: 00:03:19.836 argparse: explicitly disabled via build config 00:03:19.836 metrics: explicitly disabled via build config 00:03:19.836 acl: explicitly disabled via build config 00:03:19.836 bbdev: explicitly disabled via build config 00:03:19.836 bitratestats: explicitly disabled via build config 00:03:19.836 bpf: explicitly disabled via build config 00:03:19.836 cfgfile: explicitly disabled via build config 00:03:19.836 distributor: explicitly disabled via build config 00:03:19.836 efd: explicitly disabled via build config 00:03:19.836 eventdev: explicitly disabled via build config 00:03:19.836 dispatcher: explicitly disabled via build config 00:03:19.836 gpudev: explicitly disabled via build config 00:03:19.836 gro: explicitly disabled via build config 00:03:19.836 gso: explicitly disabled via build config 00:03:19.836 ip_frag: explicitly disabled via build config 00:03:19.836 jobstats: explicitly disabled via build config 00:03:19.836 latencystats: explicitly disabled via build config 00:03:19.836 lpm: explicitly disabled via build config 00:03:19.836 member: explicitly disabled via build config 00:03:19.836 pcapng: explicitly disabled via build config 00:03:19.836 rawdev: explicitly disabled via build config 00:03:19.836 regexdev: explicitly disabled via build config 00:03:19.836 mldev: explicitly disabled via build config 00:03:19.836 rib: explicitly disabled via build config 00:03:19.836 sched: explicitly disabled via build config 00:03:19.836 stack: explicitly disabled via build config 00:03:19.836 ipsec: explicitly disabled via build config 00:03:19.836 pdcp: explicitly disabled via build config 00:03:19.836 fib: explicitly disabled via build config 00:03:19.836 port: explicitly disabled via build config 00:03:19.836 pdump: explicitly disabled via build config 00:03:19.836 table: explicitly disabled via build config 00:03:19.836 pipeline: explicitly disabled via build config 00:03:19.836 graph: explicitly disabled via build config 00:03:19.836 node: explicitly disabled via build config 00:03:19.836 00:03:19.836 drivers: 00:03:19.836 common/cpt: not in enabled drivers build config 00:03:19.836 common/dpaax: not in enabled drivers build config 00:03:19.836 common/iavf: not in enabled drivers build config 00:03:19.836 common/idpf: not in enabled drivers build config 00:03:19.836 common/ionic: not in enabled drivers build config 00:03:19.836 common/mvep: not in enabled drivers build config 00:03:19.836 common/octeontx: not in enabled drivers build config 00:03:19.836 bus/auxiliary: not in enabled drivers build config 00:03:19.836 bus/cdx: not in enabled drivers build config 00:03:19.836 bus/dpaa: not in enabled drivers build config 00:03:19.836 bus/fslmc: not in enabled drivers build config 00:03:19.836 bus/ifpga: not in enabled drivers build config 00:03:19.836 bus/platform: not in enabled drivers build config 00:03:19.836 bus/uacce: not in enabled drivers build config 00:03:19.836 bus/vmbus: not in enabled drivers build config 00:03:19.836 common/cnxk: not in enabled drivers build config 00:03:19.836 common/mlx5: not in enabled drivers build config 00:03:19.836 common/nfp: not in enabled drivers build config 00:03:19.836 common/nitrox: not in enabled drivers build config 00:03:19.836 common/qat: not in enabled drivers build config 00:03:19.836 common/sfc_efx: not in enabled drivers build config 00:03:19.836 mempool/bucket: not in enabled 
drivers build config 00:03:19.836 mempool/cnxk: not in enabled drivers build config 00:03:19.836 mempool/dpaa: not in enabled drivers build config 00:03:19.836 mempool/dpaa2: not in enabled drivers build config 00:03:19.836 mempool/octeontx: not in enabled drivers build config 00:03:19.836 mempool/stack: not in enabled drivers build config 00:03:19.836 dma/cnxk: not in enabled drivers build config 00:03:19.836 dma/dpaa: not in enabled drivers build config 00:03:19.836 dma/dpaa2: not in enabled drivers build config 00:03:19.836 dma/hisilicon: not in enabled drivers build config 00:03:19.836 dma/idxd: not in enabled drivers build config 00:03:19.836 dma/ioat: not in enabled drivers build config 00:03:19.836 dma/skeleton: not in enabled drivers build config 00:03:19.836 net/af_packet: not in enabled drivers build config 00:03:19.836 net/af_xdp: not in enabled drivers build config 00:03:19.836 net/ark: not in enabled drivers build config 00:03:19.836 net/atlantic: not in enabled drivers build config 00:03:19.836 net/avp: not in enabled drivers build config 00:03:19.836 net/axgbe: not in enabled drivers build config 00:03:19.836 net/bnx2x: not in enabled drivers build config 00:03:19.836 net/bnxt: not in enabled drivers build config 00:03:19.836 net/bonding: not in enabled drivers build config 00:03:19.836 net/cnxk: not in enabled drivers build config 00:03:19.836 net/cpfl: not in enabled drivers build config 00:03:19.836 net/cxgbe: not in enabled drivers build config 00:03:19.836 net/dpaa: not in enabled drivers build config 00:03:19.836 net/dpaa2: not in enabled drivers build config 00:03:19.836 net/e1000: not in enabled drivers build config 00:03:19.836 net/ena: not in enabled drivers build config 00:03:19.836 net/enetc: not in enabled drivers build config 00:03:19.836 net/enetfec: not in enabled drivers build config 00:03:19.836 net/enic: not in enabled drivers build config 00:03:19.836 net/failsafe: not in enabled drivers build config 00:03:19.836 net/fm10k: not in enabled drivers build config 00:03:19.836 net/gve: not in enabled drivers build config 00:03:19.836 net/hinic: not in enabled drivers build config 00:03:19.836 net/hns3: not in enabled drivers build config 00:03:19.836 net/i40e: not in enabled drivers build config 00:03:19.836 net/iavf: not in enabled drivers build config 00:03:19.836 net/ice: not in enabled drivers build config 00:03:19.836 net/idpf: not in enabled drivers build config 00:03:19.836 net/igc: not in enabled drivers build config 00:03:19.836 net/ionic: not in enabled drivers build config 00:03:19.836 net/ipn3ke: not in enabled drivers build config 00:03:19.836 net/ixgbe: not in enabled drivers build config 00:03:19.836 net/mana: not in enabled drivers build config 00:03:19.836 net/memif: not in enabled drivers build config 00:03:19.836 net/mlx4: not in enabled drivers build config 00:03:19.836 net/mlx5: not in enabled drivers build config 00:03:19.836 net/mvneta: not in enabled drivers build config 00:03:19.836 net/mvpp2: not in enabled drivers build config 00:03:19.836 net/netvsc: not in enabled drivers build config 00:03:19.836 net/nfb: not in enabled drivers build config 00:03:19.836 net/nfp: not in enabled drivers build config 00:03:19.836 net/ngbe: not in enabled drivers build config 00:03:19.836 net/null: not in enabled drivers build config 00:03:19.836 net/octeontx: not in enabled drivers build config 00:03:19.836 net/octeon_ep: not in enabled drivers build config 00:03:19.836 net/pcap: not in enabled drivers build config 00:03:19.836 net/pfe: not in 
enabled drivers build config 00:03:19.836 net/qede: not in enabled drivers build config 00:03:19.836 net/ring: not in enabled drivers build config 00:03:19.836 net/sfc: not in enabled drivers build config 00:03:19.836 net/softnic: not in enabled drivers build config 00:03:19.836 net/tap: not in enabled drivers build config 00:03:19.836 net/thunderx: not in enabled drivers build config 00:03:19.836 net/txgbe: not in enabled drivers build config 00:03:19.836 net/vdev_netvsc: not in enabled drivers build config 00:03:19.836 net/vhost: not in enabled drivers build config 00:03:19.836 net/virtio: not in enabled drivers build config 00:03:19.836 net/vmxnet3: not in enabled drivers build config 00:03:19.836 raw/*: missing internal dependency, "rawdev" 00:03:19.836 crypto/armv8: not in enabled drivers build config 00:03:19.836 crypto/bcmfs: not in enabled drivers build config 00:03:19.836 crypto/caam_jr: not in enabled drivers build config 00:03:19.836 crypto/ccp: not in enabled drivers build config 00:03:19.836 crypto/cnxk: not in enabled drivers build config 00:03:19.836 crypto/dpaa_sec: not in enabled drivers build config 00:03:19.836 crypto/dpaa2_sec: not in enabled drivers build config 00:03:19.836 crypto/ipsec_mb: not in enabled drivers build config 00:03:19.836 crypto/mlx5: not in enabled drivers build config 00:03:19.836 crypto/mvsam: not in enabled drivers build config 00:03:19.836 crypto/nitrox: not in enabled drivers build config 00:03:19.836 crypto/null: not in enabled drivers build config 00:03:19.836 crypto/octeontx: not in enabled drivers build config 00:03:19.836 crypto/openssl: not in enabled drivers build config 00:03:19.836 crypto/scheduler: not in enabled drivers build config 00:03:19.836 crypto/uadk: not in enabled drivers build config 00:03:19.836 crypto/virtio: not in enabled drivers build config 00:03:19.836 compress/isal: not in enabled drivers build config 00:03:19.837 compress/mlx5: not in enabled drivers build config 00:03:19.837 compress/nitrox: not in enabled drivers build config 00:03:19.837 compress/octeontx: not in enabled drivers build config 00:03:19.837 compress/zlib: not in enabled drivers build config 00:03:19.837 regex/*: missing internal dependency, "regexdev" 00:03:19.837 ml/*: missing internal dependency, "mldev" 00:03:19.837 vdpa/ifc: not in enabled drivers build config 00:03:19.837 vdpa/mlx5: not in enabled drivers build config 00:03:19.837 vdpa/nfp: not in enabled drivers build config 00:03:19.837 vdpa/sfc: not in enabled drivers build config 00:03:19.837 event/*: missing internal dependency, "eventdev" 00:03:19.837 baseband/*: missing internal dependency, "bbdev" 00:03:19.837 gpu/*: missing internal dependency, "gpudev" 00:03:19.837 00:03:19.837 00:03:19.837 Build targets in project: 85 00:03:19.837 00:03:19.837 DPDK 24.03.0 00:03:19.837 00:03:19.837 User defined options 00:03:19.837 buildtype : debug 00:03:19.837 default_library : shared 00:03:19.837 libdir : lib 00:03:19.837 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:19.837 b_sanitize : address 00:03:19.837 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:19.837 c_link_args : 00:03:19.837 cpu_instruction_set: native 00:03:19.837 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:19.837 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:19.837 enable_docs : false 00:03:19.837 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:19.837 enable_kmods : false 00:03:19.837 max_lcores : 128 00:03:19.837 tests : false 00:03:19.837 00:03:19.837 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:19.837 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:19.837 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:19.837 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:19.837 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:19.837 [4/268] Linking static target lib/librte_log.a 00:03:19.837 [5/268] Linking static target lib/librte_kvargs.a 00:03:19.837 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:19.837 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.837 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:19.837 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:20.095 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:20.095 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:20.095 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:20.095 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:20.095 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:20.354 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:20.354 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.354 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:20.354 [18/268] Linking static target lib/librte_telemetry.a 00:03:20.354 [19/268] Linking target lib/librte_log.so.24.1 00:03:20.354 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:20.613 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:20.613 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:21.180 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:21.180 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:21.180 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:21.180 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:21.180 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:21.180 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:21.180 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:21.180 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:21.180 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.180 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:21.438 
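The "User defined options" summary printed above records how the bundled DPDK 24.03 was configured for this run: a debug build of shared libraries with AddressSanitizer, a trimmed set of apps, libs, and drivers, and tests disabled. As a rough illustration only, those options would correspond to a standalone meson setup invocation along the lines below; the actual command is issued by SPDK's dpdkbuild scripts and is not shown in this log, so the exact flag spelling, working directory, and build-directory layout here are reconstructed assumptions, not the literal CI command.

# Illustrative reconstruction of the DPDK configure step, assembled from the
# option summary above. Assumes it is run from the bundled DPDK source tree
# (/home/vagrant/spdk_repo/spdk/dpdk); the real invocation comes from SPDK's
# build scripts and may differ in detail.
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  --libdir=lib \
  --buildtype=debug \
  -Ddefault_library=shared \
  -Db_sanitize=address \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Dmax_lcores=128 \
  -Dtests=false \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
  -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
# Build step; the log shows ninja entering this directory (and, later, a
# backend command using -j 10).
ninja -C build-tmp -j 10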
[33/268] Linking target lib/librte_telemetry.so.24.1 00:03:21.438 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:21.697 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:21.697 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:21.697 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:21.955 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:21.955 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:21.955 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:21.955 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:22.214 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:22.214 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:22.214 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:22.214 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:22.472 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:22.472 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:22.730 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:22.730 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:22.730 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:22.730 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:22.730 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:22.989 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:22.989 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:23.247 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:23.247 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:23.247 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:23.247 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:23.505 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:23.505 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:23.505 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:23.505 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:23.505 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:23.505 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:23.762 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:23.762 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:24.020 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:24.020 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:24.279 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:24.279 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:24.537 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:24.537 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:24.537 
[73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:24.537 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:24.537 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:24.537 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:24.537 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:24.537 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:24.796 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:24.796 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:24.796 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:24.796 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:25.363 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:25.363 [84/268] Linking static target lib/librte_ring.a 00:03:25.363 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:25.363 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:25.363 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:25.363 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:25.363 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:25.363 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:25.621 [91/268] Linking static target lib/librte_eal.a 00:03:25.621 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:25.621 [93/268] Linking static target lib/librte_mempool.a 00:03:25.879 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.879 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:26.137 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:26.137 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:26.137 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:26.137 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:26.137 [100/268] Linking static target lib/librte_rcu.a 00:03:26.137 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:26.395 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:26.395 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:26.653 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:26.653 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.653 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:26.653 [107/268] Linking static target lib/librte_meter.a 00:03:26.912 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:26.912 [109/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.912 [110/268] Linking static target lib/librte_mbuf.a 00:03:26.912 [111/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:26.912 [112/268] Linking static target lib/librte_net.a 00:03:27.170 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:27.170 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:27.170 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:27.170 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.428 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.428 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:27.994 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:27.994 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:28.253 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.253 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:28.517 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:28.517 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:28.517 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:28.517 [126/268] Linking static target lib/librte_pci.a 00:03:28.776 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:29.038 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:29.038 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:29.038 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:29.038 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:29.038 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.038 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:29.038 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:29.038 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:29.038 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:29.296 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:29.296 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:29.296 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:29.296 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:29.296 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:29.296 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:29.296 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:29.296 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:29.554 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:29.811 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:30.070 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:30.070 [148/268] Linking static target lib/librte_cmdline.a 00:03:30.070 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:30.070 [150/268] Linking static target lib/librte_timer.a 00:03:30.070 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:30.070 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:30.636 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:30.636 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:30.636 [155/268] 
Linking static target lib/librte_ethdev.a 00:03:30.636 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:30.636 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:30.895 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.895 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:30.895 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:31.153 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:31.153 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:31.153 [163/268] Linking static target lib/librte_compressdev.a 00:03:31.153 [164/268] Linking static target lib/librte_hash.a 00:03:31.411 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:31.411 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:31.669 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:31.669 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:31.669 [169/268] Linking static target lib/librte_dmadev.a 00:03:31.669 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.669 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:31.927 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:32.185 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:32.185 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.442 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:32.442 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:32.442 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.699 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.699 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:32.699 [180/268] Linking static target lib/librte_cryptodev.a 00:03:32.699 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:32.699 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:32.699 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:32.957 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:33.215 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:33.215 [186/268] Linking static target lib/librte_power.a 00:03:33.473 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:33.473 [188/268] Linking static target lib/librte_reorder.a 00:03:33.473 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:33.473 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:33.731 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:33.989 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.989 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:33.989 [194/268] Linking static target lib/librte_security.a 00:03:34.247 
[195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:34.247 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.812 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.812 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:34.812 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:35.070 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:35.070 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:35.070 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:35.329 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.329 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:35.894 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:35.894 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:35.895 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:35.895 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:35.895 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:35.895 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:35.895 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:36.152 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:36.152 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:36.152 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:36.152 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:36.152 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:36.152 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:36.152 [218/268] Linking static target drivers/librte_bus_pci.a 00:03:36.152 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:36.410 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:36.410 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:36.410 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.668 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:36.668 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:36.668 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:36.668 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:36.668 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.602 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:37.602 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.861 [230/268] Linking target lib/librte_eal.so.24.1 00:03:37.861 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:38.120 [232/268] 
Linking target lib/librte_ring.so.24.1 00:03:38.120 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:38.120 [234/268] Linking target lib/librte_timer.so.24.1 00:03:38.120 [235/268] Linking target lib/librte_pci.so.24.1 00:03:38.120 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:38.120 [237/268] Linking target lib/librte_meter.so.24.1 00:03:38.120 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:38.120 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:38.120 [240/268] Linking target lib/librte_rcu.so.24.1 00:03:38.120 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:38.120 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:38.380 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:38.380 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:38.380 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:38.380 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:38.380 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:38.380 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:38.380 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:38.642 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:38.642 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:38.642 [252/268] Linking target lib/librte_net.so.24.1 00:03:38.643 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:03:38.643 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:38.901 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:38.901 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:38.901 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:38.901 [258/268] Linking target lib/librte_hash.so.24.1 00:03:38.901 [259/268] Linking target lib/librte_security.so.24.1 00:03:38.901 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.901 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:38.901 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:39.159 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:39.159 [264/268] Linking target lib/librte_power.so.24.1 00:03:42.445 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:42.445 [266/268] Linking static target lib/librte_vhost.a 00:03:44.348 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.348 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:44.348 INFO: autodetecting backend as ninja 00:03:44.348 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:06.278 CC lib/ut_mock/mock.o 00:04:06.278 CC lib/log/log.o 00:04:06.278 CC lib/log/log_flags.o 00:04:06.278 CC lib/log/log_deprecated.o 00:04:06.278 CC lib/ut/ut.o 00:04:06.278 LIB libspdk_ut.a 00:04:06.278 LIB libspdk_ut_mock.a 00:04:06.278 LIB libspdk_log.a 00:04:06.278 SO libspdk_ut.so.2.0 00:04:06.278 SO libspdk_ut_mock.so.6.0 00:04:06.278 SO libspdk_log.so.7.1 00:04:06.278 SYMLINK libspdk_ut.so 00:04:06.278 SYMLINK 
libspdk_ut_mock.so 00:04:06.278 SYMLINK libspdk_log.so 00:04:06.278 CC lib/util/bit_array.o 00:04:06.278 CC lib/util/base64.o 00:04:06.278 CC lib/util/cpuset.o 00:04:06.278 CC lib/util/crc16.o 00:04:06.278 CC lib/util/crc32.o 00:04:06.278 CC lib/util/crc32c.o 00:04:06.278 CXX lib/trace_parser/trace.o 00:04:06.278 CC lib/dma/dma.o 00:04:06.278 CC lib/ioat/ioat.o 00:04:06.278 CC lib/vfio_user/host/vfio_user_pci.o 00:04:06.278 CC lib/util/crc32_ieee.o 00:04:06.278 CC lib/util/crc64.o 00:04:06.278 CC lib/vfio_user/host/vfio_user.o 00:04:06.278 CC lib/util/dif.o 00:04:06.278 CC lib/util/fd.o 00:04:06.278 CC lib/util/fd_group.o 00:04:06.278 LIB libspdk_dma.a 00:04:06.278 CC lib/util/file.o 00:04:06.278 SO libspdk_dma.so.5.0 00:04:06.278 CC lib/util/hexlify.o 00:04:06.278 CC lib/util/iov.o 00:04:06.278 SYMLINK libspdk_dma.so 00:04:06.278 CC lib/util/math.o 00:04:06.278 CC lib/util/net.o 00:04:06.278 LIB libspdk_ioat.a 00:04:06.278 SO libspdk_ioat.so.7.0 00:04:06.278 CC lib/util/pipe.o 00:04:06.278 CC lib/util/strerror_tls.o 00:04:06.278 SYMLINK libspdk_ioat.so 00:04:06.278 LIB libspdk_vfio_user.a 00:04:06.278 CC lib/util/string.o 00:04:06.278 SO libspdk_vfio_user.so.5.0 00:04:06.278 CC lib/util/uuid.o 00:04:06.278 CC lib/util/xor.o 00:04:06.278 SYMLINK libspdk_vfio_user.so 00:04:06.278 CC lib/util/zipf.o 00:04:06.278 CC lib/util/md5.o 00:04:06.278 LIB libspdk_util.a 00:04:06.278 SO libspdk_util.so.10.1 00:04:06.278 LIB libspdk_trace_parser.a 00:04:06.278 SO libspdk_trace_parser.so.6.0 00:04:06.278 SYMLINK libspdk_util.so 00:04:06.278 SYMLINK libspdk_trace_parser.so 00:04:06.278 CC lib/rdma_utils/rdma_utils.o 00:04:06.278 CC lib/conf/conf.o 00:04:06.278 CC lib/vmd/vmd.o 00:04:06.278 CC lib/vmd/led.o 00:04:06.278 CC lib/env_dpdk/env.o 00:04:06.278 CC lib/env_dpdk/memory.o 00:04:06.278 CC lib/env_dpdk/pci.o 00:04:06.278 CC lib/json/json_parse.o 00:04:06.278 CC lib/env_dpdk/init.o 00:04:06.278 CC lib/idxd/idxd.o 00:04:06.278 CC lib/env_dpdk/threads.o 00:04:06.278 LIB libspdk_conf.a 00:04:06.278 SO libspdk_conf.so.6.0 00:04:06.278 CC lib/json/json_util.o 00:04:06.278 LIB libspdk_rdma_utils.a 00:04:06.278 CC lib/env_dpdk/pci_ioat.o 00:04:06.278 SYMLINK libspdk_conf.so 00:04:06.278 SO libspdk_rdma_utils.so.1.0 00:04:06.278 CC lib/env_dpdk/pci_virtio.o 00:04:06.278 SYMLINK libspdk_rdma_utils.so 00:04:06.278 CC lib/idxd/idxd_user.o 00:04:06.278 CC lib/idxd/idxd_kernel.o 00:04:06.278 CC lib/env_dpdk/pci_vmd.o 00:04:06.278 CC lib/env_dpdk/pci_idxd.o 00:04:06.278 CC lib/json/json_write.o 00:04:06.278 CC lib/env_dpdk/pci_event.o 00:04:06.278 CC lib/env_dpdk/sigbus_handler.o 00:04:06.278 CC lib/rdma_provider/common.o 00:04:06.278 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:06.278 CC lib/env_dpdk/pci_dpdk.o 00:04:06.278 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:06.278 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:06.278 LIB libspdk_idxd.a 00:04:06.278 LIB libspdk_vmd.a 00:04:06.278 SO libspdk_idxd.so.12.1 00:04:06.278 SO libspdk_vmd.so.6.0 00:04:06.278 SYMLINK libspdk_idxd.so 00:04:06.278 LIB libspdk_rdma_provider.a 00:04:06.278 LIB libspdk_json.a 00:04:06.278 SYMLINK libspdk_vmd.so 00:04:06.278 SO libspdk_rdma_provider.so.7.0 00:04:06.278 SO libspdk_json.so.6.0 00:04:06.278 SYMLINK libspdk_rdma_provider.so 00:04:06.278 SYMLINK libspdk_json.so 00:04:06.536 CC lib/jsonrpc/jsonrpc_server.o 00:04:06.536 CC lib/jsonrpc/jsonrpc_client.o 00:04:06.536 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:06.536 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:06.793 LIB libspdk_jsonrpc.a 00:04:06.793 SO libspdk_jsonrpc.so.6.0 00:04:06.793 
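At this point the log has switched from the DPDK subproject to SPDK's own make-based build, whose quiet output abbreviates each step to a tag plus target: CC/CXX compile C or C++ objects, LIB archives them into a static .a, SO links the versioned shared object, and SYMLINK creates the unversioned .so link. The sketch below is only a rough shell equivalent of one such sequence, using libspdk_log from the output above as the example; the real rules live in the SPDK tree's mk/ makefiles and use different flags.

# Rough equivalent of one CC/LIB/SO/SYMLINK sequence seen above
# (illustrative only; not the literal commands from SPDK's makefiles).
cc -fPIC -c lib/log/log.c -o log.o                            #     CC lib/log/log.o
ar crs libspdk_log.a log.o log_flags.o log_deprecated.o       #    LIB libspdk_log.a
cc -shared -Wl,-soname,libspdk_log.so.7.1 \
   -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o   #     SO libspdk_log.so.7.1
ln -sf libspdk_log.so.7.1 libspdk_log.so                      # SYMLINK libspdk_log.so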
SYMLINK libspdk_jsonrpc.so 00:04:07.051 LIB libspdk_env_dpdk.a 00:04:07.051 SO libspdk_env_dpdk.so.15.1 00:04:07.051 CC lib/rpc/rpc.o 00:04:07.309 SYMLINK libspdk_env_dpdk.so 00:04:07.309 LIB libspdk_rpc.a 00:04:07.309 SO libspdk_rpc.so.6.0 00:04:07.568 SYMLINK libspdk_rpc.so 00:04:07.826 CC lib/keyring/keyring_rpc.o 00:04:07.826 CC lib/keyring/keyring.o 00:04:07.826 CC lib/notify/notify.o 00:04:07.826 CC lib/notify/notify_rpc.o 00:04:07.826 CC lib/trace/trace.o 00:04:07.826 CC lib/trace/trace_rpc.o 00:04:07.826 CC lib/trace/trace_flags.o 00:04:07.826 LIB libspdk_notify.a 00:04:07.826 SO libspdk_notify.so.6.0 00:04:08.096 LIB libspdk_keyring.a 00:04:08.096 SYMLINK libspdk_notify.so 00:04:08.096 SO libspdk_keyring.so.2.0 00:04:08.096 LIB libspdk_trace.a 00:04:08.096 SYMLINK libspdk_keyring.so 00:04:08.096 SO libspdk_trace.so.11.0 00:04:08.096 SYMLINK libspdk_trace.so 00:04:08.355 CC lib/sock/sock_rpc.o 00:04:08.355 CC lib/sock/sock.o 00:04:08.355 CC lib/thread/thread.o 00:04:08.355 CC lib/thread/iobuf.o 00:04:08.923 LIB libspdk_sock.a 00:04:09.182 SO libspdk_sock.so.10.0 00:04:09.182 SYMLINK libspdk_sock.so 00:04:09.448 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:09.448 CC lib/nvme/nvme_fabric.o 00:04:09.448 CC lib/nvme/nvme_ctrlr.o 00:04:09.448 CC lib/nvme/nvme_ns_cmd.o 00:04:09.448 CC lib/nvme/nvme_ns.o 00:04:09.448 CC lib/nvme/nvme_pcie_common.o 00:04:09.448 CC lib/nvme/nvme_qpair.o 00:04:09.448 CC lib/nvme/nvme_pcie.o 00:04:09.448 CC lib/nvme/nvme.o 00:04:10.420 CC lib/nvme/nvme_quirks.o 00:04:10.420 CC lib/nvme/nvme_transport.o 00:04:10.420 CC lib/nvme/nvme_discovery.o 00:04:10.420 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:10.420 LIB libspdk_thread.a 00:04:10.420 SO libspdk_thread.so.11.0 00:04:10.679 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:10.679 CC lib/nvme/nvme_tcp.o 00:04:10.679 CC lib/nvme/nvme_opal.o 00:04:10.679 SYMLINK libspdk_thread.so 00:04:10.679 CC lib/nvme/nvme_io_msg.o 00:04:10.679 CC lib/nvme/nvme_poll_group.o 00:04:10.938 CC lib/nvme/nvme_zns.o 00:04:10.938 CC lib/nvme/nvme_stubs.o 00:04:11.197 CC lib/nvme/nvme_auth.o 00:04:11.197 CC lib/nvme/nvme_cuse.o 00:04:11.197 CC lib/nvme/nvme_vfio_user.o 00:04:11.197 CC lib/nvme/nvme_rdma.o 00:04:11.764 CC lib/accel/accel.o 00:04:11.764 CC lib/blob/blobstore.o 00:04:11.764 CC lib/init/json_config.o 00:04:11.764 CC lib/virtio/virtio.o 00:04:12.034 CC lib/virtio/virtio_vhost_user.o 00:04:12.034 CC lib/init/subsystem.o 00:04:12.291 CC lib/init/subsystem_rpc.o 00:04:12.292 CC lib/init/rpc.o 00:04:12.292 CC lib/accel/accel_rpc.o 00:04:12.292 CC lib/blob/request.o 00:04:12.292 LIB libspdk_init.a 00:04:12.292 CC lib/virtio/virtio_vfio_user.o 00:04:12.292 CC lib/vfu_tgt/tgt_endpoint.o 00:04:12.292 SO libspdk_init.so.6.0 00:04:12.550 CC lib/fsdev/fsdev.o 00:04:12.550 CC lib/fsdev/fsdev_io.o 00:04:12.550 SYMLINK libspdk_init.so 00:04:12.550 CC lib/fsdev/fsdev_rpc.o 00:04:12.550 CC lib/vfu_tgt/tgt_rpc.o 00:04:12.550 CC lib/virtio/virtio_pci.o 00:04:12.808 CC lib/event/app.o 00:04:12.808 CC lib/event/reactor.o 00:04:12.808 CC lib/event/log_rpc.o 00:04:12.808 LIB libspdk_vfu_tgt.a 00:04:12.808 SO libspdk_vfu_tgt.so.3.0 00:04:12.808 CC lib/event/app_rpc.o 00:04:13.067 SYMLINK libspdk_vfu_tgt.so 00:04:13.067 CC lib/blob/zeroes.o 00:04:13.067 CC lib/event/scheduler_static.o 00:04:13.067 LIB libspdk_virtio.a 00:04:13.067 SO libspdk_virtio.so.7.0 00:04:13.067 LIB libspdk_nvme.a 00:04:13.067 CC lib/accel/accel_sw.o 00:04:13.067 CC lib/blob/blob_bs_dev.o 00:04:13.067 SYMLINK libspdk_virtio.so 00:04:13.325 LIB libspdk_fsdev.a 00:04:13.325 SO 
libspdk_fsdev.so.2.0 00:04:13.325 LIB libspdk_event.a 00:04:13.325 SO libspdk_nvme.so.15.0 00:04:13.325 SYMLINK libspdk_fsdev.so 00:04:13.325 SO libspdk_event.so.14.0 00:04:13.584 SYMLINK libspdk_event.so 00:04:13.584 LIB libspdk_accel.a 00:04:13.584 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:13.584 SO libspdk_accel.so.16.0 00:04:13.584 SYMLINK libspdk_nvme.so 00:04:13.584 SYMLINK libspdk_accel.so 00:04:13.843 CC lib/bdev/bdev.o 00:04:13.843 CC lib/bdev/scsi_nvme.o 00:04:13.843 CC lib/bdev/bdev_rpc.o 00:04:13.843 CC lib/bdev/part.o 00:04:13.843 CC lib/bdev/bdev_zone.o 00:04:14.411 LIB libspdk_fuse_dispatcher.a 00:04:14.411 SO libspdk_fuse_dispatcher.so.1.0 00:04:14.411 SYMLINK libspdk_fuse_dispatcher.so 00:04:15.788 LIB libspdk_blob.a 00:04:15.788 SO libspdk_blob.so.12.0 00:04:16.046 SYMLINK libspdk_blob.so 00:04:16.305 CC lib/lvol/lvol.o 00:04:16.305 CC lib/blobfs/blobfs.o 00:04:16.305 CC lib/blobfs/tree.o 00:04:17.242 LIB libspdk_blobfs.a 00:04:17.242 LIB libspdk_lvol.a 00:04:17.500 SO libspdk_lvol.so.11.0 00:04:17.500 SO libspdk_blobfs.so.11.0 00:04:17.500 LIB libspdk_bdev.a 00:04:17.500 SYMLINK libspdk_lvol.so 00:04:17.500 SYMLINK libspdk_blobfs.so 00:04:17.500 SO libspdk_bdev.so.17.0 00:04:17.763 SYMLINK libspdk_bdev.so 00:04:17.763 CC lib/scsi/dev.o 00:04:17.763 CC lib/nvmf/ctrlr_discovery.o 00:04:17.763 CC lib/nvmf/ctrlr.o 00:04:17.763 CC lib/scsi/lun.o 00:04:17.763 CC lib/scsi/port.o 00:04:17.763 CC lib/nvmf/ctrlr_bdev.o 00:04:17.763 CC lib/nbd/nbd.o 00:04:17.763 CC lib/scsi/scsi.o 00:04:17.763 CC lib/ublk/ublk.o 00:04:17.764 CC lib/ftl/ftl_core.o 00:04:18.023 CC lib/ftl/ftl_init.o 00:04:18.023 CC lib/ftl/ftl_layout.o 00:04:18.023 CC lib/ftl/ftl_debug.o 00:04:18.281 CC lib/nbd/nbd_rpc.o 00:04:18.282 CC lib/scsi/scsi_bdev.o 00:04:18.282 CC lib/ftl/ftl_io.o 00:04:18.540 CC lib/ftl/ftl_sb.o 00:04:18.540 CC lib/scsi/scsi_pr.o 00:04:18.540 CC lib/scsi/scsi_rpc.o 00:04:18.540 LIB libspdk_nbd.a 00:04:18.540 CC lib/scsi/task.o 00:04:18.540 SO libspdk_nbd.so.7.0 00:04:18.540 SYMLINK libspdk_nbd.so 00:04:18.540 CC lib/ftl/ftl_l2p.o 00:04:18.540 CC lib/ublk/ublk_rpc.o 00:04:18.540 CC lib/ftl/ftl_l2p_flat.o 00:04:18.540 CC lib/ftl/ftl_nv_cache.o 00:04:18.798 CC lib/ftl/ftl_band.o 00:04:18.798 CC lib/nvmf/subsystem.o 00:04:18.798 LIB libspdk_ublk.a 00:04:18.798 CC lib/nvmf/nvmf.o 00:04:18.798 CC lib/ftl/ftl_band_ops.o 00:04:18.798 CC lib/nvmf/nvmf_rpc.o 00:04:18.798 SO libspdk_ublk.so.3.0 00:04:18.798 CC lib/ftl/ftl_writer.o 00:04:18.798 LIB libspdk_scsi.a 00:04:19.057 SYMLINK libspdk_ublk.so 00:04:19.057 CC lib/ftl/ftl_rq.o 00:04:19.057 SO libspdk_scsi.so.9.0 00:04:19.057 CC lib/ftl/ftl_reloc.o 00:04:19.057 CC lib/ftl/ftl_l2p_cache.o 00:04:19.315 SYMLINK libspdk_scsi.so 00:04:19.315 CC lib/ftl/ftl_p2l.o 00:04:19.315 CC lib/ftl/ftl_p2l_log.o 00:04:19.315 CC lib/iscsi/conn.o 00:04:19.573 CC lib/ftl/mngt/ftl_mngt.o 00:04:19.573 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:19.832 CC lib/vhost/vhost.o 00:04:19.832 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:19.832 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:19.832 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:19.832 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:20.090 CC lib/vhost/vhost_rpc.o 00:04:20.090 CC lib/vhost/vhost_scsi.o 00:04:20.090 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:20.090 CC lib/iscsi/init_grp.o 00:04:20.090 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:20.347 CC lib/vhost/vhost_blk.o 00:04:20.347 CC lib/vhost/rte_vhost_user.o 00:04:20.347 CC lib/nvmf/transport.o 00:04:20.347 CC lib/iscsi/iscsi.o 00:04:20.347 CC lib/nvmf/tcp.o 00:04:20.347 CC 
lib/ftl/mngt/ftl_mngt_band.o 00:04:20.347 CC lib/iscsi/param.o 00:04:20.602 CC lib/iscsi/portal_grp.o 00:04:20.602 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:20.860 CC lib/iscsi/tgt_node.o 00:04:20.860 CC lib/iscsi/iscsi_subsystem.o 00:04:20.860 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:21.117 CC lib/iscsi/iscsi_rpc.o 00:04:21.117 CC lib/iscsi/task.o 00:04:21.117 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:21.117 CC lib/nvmf/stubs.o 00:04:21.375 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:21.375 CC lib/ftl/utils/ftl_conf.o 00:04:21.375 CC lib/nvmf/mdns_server.o 00:04:21.375 CC lib/ftl/utils/ftl_md.o 00:04:21.633 LIB libspdk_vhost.a 00:04:21.633 CC lib/nvmf/vfio_user.o 00:04:21.633 CC lib/nvmf/rdma.o 00:04:21.633 SO libspdk_vhost.so.8.0 00:04:21.633 CC lib/ftl/utils/ftl_mempool.o 00:04:21.633 CC lib/nvmf/auth.o 00:04:21.633 SYMLINK libspdk_vhost.so 00:04:21.633 CC lib/ftl/utils/ftl_bitmap.o 00:04:21.633 CC lib/ftl/utils/ftl_property.o 00:04:21.892 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:21.892 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:21.892 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:21.892 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:21.892 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:22.151 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:22.151 LIB libspdk_iscsi.a 00:04:22.151 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:22.151 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:22.151 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:22.151 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:22.151 SO libspdk_iscsi.so.8.0 00:04:22.527 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:22.527 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:22.527 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:22.527 CC lib/ftl/base/ftl_base_dev.o 00:04:22.527 CC lib/ftl/base/ftl_base_bdev.o 00:04:22.527 SYMLINK libspdk_iscsi.so 00:04:22.527 CC lib/ftl/ftl_trace.o 00:04:22.785 LIB libspdk_ftl.a 00:04:23.045 SO libspdk_ftl.so.9.0 00:04:23.304 SYMLINK libspdk_ftl.so 00:04:24.238 LIB libspdk_nvmf.a 00:04:24.238 SO libspdk_nvmf.so.20.0 00:04:24.496 SYMLINK libspdk_nvmf.so 00:04:25.062 CC module/vfu_device/vfu_virtio.o 00:04:25.062 CC module/env_dpdk/env_dpdk_rpc.o 00:04:25.062 CC module/sock/posix/posix.o 00:04:25.062 CC module/sock/uring/uring.o 00:04:25.062 CC module/keyring/file/keyring.o 00:04:25.062 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:25.062 CC module/fsdev/aio/fsdev_aio.o 00:04:25.062 CC module/accel/error/accel_error.o 00:04:25.062 CC module/keyring/linux/keyring.o 00:04:25.062 CC module/blob/bdev/blob_bdev.o 00:04:25.062 LIB libspdk_env_dpdk_rpc.a 00:04:25.062 SO libspdk_env_dpdk_rpc.so.6.0 00:04:25.062 SYMLINK libspdk_env_dpdk_rpc.so 00:04:25.062 CC module/keyring/file/keyring_rpc.o 00:04:25.322 CC module/keyring/linux/keyring_rpc.o 00:04:25.322 LIB libspdk_scheduler_dynamic.a 00:04:25.322 SO libspdk_scheduler_dynamic.so.4.0 00:04:25.322 CC module/accel/error/accel_error_rpc.o 00:04:25.322 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:25.322 LIB libspdk_keyring_linux.a 00:04:25.322 LIB libspdk_keyring_file.a 00:04:25.322 SYMLINK libspdk_scheduler_dynamic.so 00:04:25.322 CC module/vfu_device/vfu_virtio_blk.o 00:04:25.322 LIB libspdk_blob_bdev.a 00:04:25.322 SO libspdk_keyring_file.so.2.0 00:04:25.322 SO libspdk_keyring_linux.so.1.0 00:04:25.322 SO libspdk_blob_bdev.so.12.0 00:04:25.322 SYMLINK libspdk_keyring_file.so 00:04:25.588 CC module/vfu_device/vfu_virtio_scsi.o 00:04:25.588 SYMLINK libspdk_keyring_linux.so 00:04:25.588 SYMLINK libspdk_blob_bdev.so 00:04:25.588 CC module/vfu_device/vfu_virtio_rpc.o 00:04:25.588 LIB libspdk_accel_error.a 
00:04:25.588 SO libspdk_accel_error.so.2.0 00:04:25.588 LIB libspdk_scheduler_dpdk_governor.a 00:04:25.588 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:25.588 SYMLINK libspdk_accel_error.so 00:04:25.588 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:25.588 CC module/vfu_device/vfu_virtio_fs.o 00:04:25.588 CC module/scheduler/gscheduler/gscheduler.o 00:04:25.870 CC module/accel/ioat/accel_ioat.o 00:04:25.870 CC module/accel/dsa/accel_dsa.o 00:04:25.870 LIB libspdk_scheduler_gscheduler.a 00:04:25.870 SO libspdk_scheduler_gscheduler.so.4.0 00:04:25.870 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:25.870 CC module/accel/ioat/accel_ioat_rpc.o 00:04:25.870 LIB libspdk_vfu_device.a 00:04:25.870 LIB libspdk_sock_uring.a 00:04:25.870 CC module/accel/iaa/accel_iaa.o 00:04:25.870 SYMLINK libspdk_scheduler_gscheduler.so 00:04:25.870 LIB libspdk_sock_posix.a 00:04:26.139 SO libspdk_vfu_device.so.3.0 00:04:26.139 SO libspdk_sock_uring.so.5.0 00:04:26.139 CC module/bdev/delay/vbdev_delay.o 00:04:26.139 SO libspdk_sock_posix.so.6.0 00:04:26.139 CC module/fsdev/aio/linux_aio_mgr.o 00:04:26.139 LIB libspdk_accel_ioat.a 00:04:26.139 CC module/accel/dsa/accel_dsa_rpc.o 00:04:26.139 SYMLINK libspdk_sock_uring.so 00:04:26.139 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:26.139 SO libspdk_accel_ioat.so.6.0 00:04:26.139 SYMLINK libspdk_vfu_device.so 00:04:26.139 SYMLINK libspdk_sock_posix.so 00:04:26.139 SYMLINK libspdk_accel_ioat.so 00:04:26.139 CC module/accel/iaa/accel_iaa_rpc.o 00:04:26.139 LIB libspdk_accel_dsa.a 00:04:26.139 CC module/blobfs/bdev/blobfs_bdev.o 00:04:26.139 SO libspdk_accel_dsa.so.5.0 00:04:26.139 CC module/bdev/error/vbdev_error.o 00:04:26.398 LIB libspdk_fsdev_aio.a 00:04:26.398 CC module/bdev/gpt/gpt.o 00:04:26.398 SO libspdk_fsdev_aio.so.1.0 00:04:26.398 LIB libspdk_accel_iaa.a 00:04:26.398 SYMLINK libspdk_accel_dsa.so 00:04:26.398 SO libspdk_accel_iaa.so.3.0 00:04:26.398 CC module/bdev/lvol/vbdev_lvol.o 00:04:26.398 CC module/bdev/malloc/bdev_malloc.o 00:04:26.398 SYMLINK libspdk_fsdev_aio.so 00:04:26.398 CC module/bdev/error/vbdev_error_rpc.o 00:04:26.398 SYMLINK libspdk_accel_iaa.so 00:04:26.398 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:26.398 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:26.398 LIB libspdk_bdev_delay.a 00:04:26.398 CC module/bdev/null/bdev_null.o 00:04:26.398 CC module/bdev/gpt/vbdev_gpt.o 00:04:26.398 SO libspdk_bdev_delay.so.6.0 00:04:26.398 CC module/bdev/nvme/bdev_nvme.o 00:04:26.656 SYMLINK libspdk_bdev_delay.so 00:04:26.656 CC module/bdev/null/bdev_null_rpc.o 00:04:26.656 LIB libspdk_bdev_error.a 00:04:26.656 SO libspdk_bdev_error.so.6.0 00:04:26.656 LIB libspdk_blobfs_bdev.a 00:04:26.656 SO libspdk_blobfs_bdev.so.6.0 00:04:26.656 SYMLINK libspdk_bdev_error.so 00:04:26.656 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:26.656 SYMLINK libspdk_blobfs_bdev.so 00:04:26.656 CC module/bdev/passthru/vbdev_passthru.o 00:04:26.656 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:26.915 LIB libspdk_bdev_null.a 00:04:26.915 LIB libspdk_bdev_gpt.a 00:04:26.915 SO libspdk_bdev_null.so.6.0 00:04:26.915 SO libspdk_bdev_gpt.so.6.0 00:04:26.915 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:26.915 SYMLINK libspdk_bdev_gpt.so 00:04:26.915 SYMLINK libspdk_bdev_null.so 00:04:26.915 CC module/bdev/nvme/nvme_rpc.o 00:04:26.915 CC module/bdev/raid/bdev_raid.o 00:04:26.915 CC module/bdev/raid/bdev_raid_rpc.o 00:04:26.915 LIB libspdk_bdev_lvol.a 00:04:27.173 LIB libspdk_bdev_malloc.a 00:04:27.173 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:27.173 CC 
module/bdev/split/vbdev_split.o 00:04:27.173 SO libspdk_bdev_lvol.so.6.0 00:04:27.173 SO libspdk_bdev_malloc.so.6.0 00:04:27.173 LIB libspdk_bdev_passthru.a 00:04:27.173 SYMLINK libspdk_bdev_lvol.so 00:04:27.173 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:27.173 SO libspdk_bdev_passthru.so.6.0 00:04:27.173 SYMLINK libspdk_bdev_malloc.so 00:04:27.173 SYMLINK libspdk_bdev_passthru.so 00:04:27.430 CC module/bdev/split/vbdev_split_rpc.o 00:04:27.430 CC module/bdev/uring/bdev_uring.o 00:04:27.430 CC module/bdev/ftl/bdev_ftl.o 00:04:27.430 CC module/bdev/aio/bdev_aio.o 00:04:27.430 LIB libspdk_bdev_zone_block.a 00:04:27.430 CC module/bdev/iscsi/bdev_iscsi.o 00:04:27.430 SO libspdk_bdev_zone_block.so.6.0 00:04:27.430 LIB libspdk_bdev_split.a 00:04:27.430 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:27.687 SO libspdk_bdev_split.so.6.0 00:04:27.687 SYMLINK libspdk_bdev_zone_block.so 00:04:27.687 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:27.687 CC module/bdev/uring/bdev_uring_rpc.o 00:04:27.687 SYMLINK libspdk_bdev_split.so 00:04:27.687 CC module/bdev/nvme/bdev_mdns_client.o 00:04:27.687 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:27.687 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:27.945 CC module/bdev/aio/bdev_aio_rpc.o 00:04:27.945 LIB libspdk_bdev_ftl.a 00:04:27.945 SO libspdk_bdev_ftl.so.6.0 00:04:27.945 LIB libspdk_bdev_uring.a 00:04:27.945 CC module/bdev/raid/bdev_raid_sb.o 00:04:27.945 SO libspdk_bdev_uring.so.6.0 00:04:27.945 CC module/bdev/raid/raid0.o 00:04:27.945 SYMLINK libspdk_bdev_ftl.so 00:04:27.945 CC module/bdev/nvme/vbdev_opal.o 00:04:27.945 LIB libspdk_bdev_iscsi.a 00:04:27.945 SYMLINK libspdk_bdev_uring.so 00:04:27.945 CC module/bdev/raid/raid1.o 00:04:27.945 LIB libspdk_bdev_aio.a 00:04:27.945 SO libspdk_bdev_iscsi.so.6.0 00:04:27.945 SO libspdk_bdev_aio.so.6.0 00:04:28.203 SYMLINK libspdk_bdev_iscsi.so 00:04:28.203 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:28.203 SYMLINK libspdk_bdev_aio.so 00:04:28.203 CC module/bdev/raid/concat.o 00:04:28.203 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.203 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:28.465 LIB libspdk_bdev_virtio.a 00:04:28.465 SO libspdk_bdev_virtio.so.6.0 00:04:28.465 LIB libspdk_bdev_raid.a 00:04:28.466 SYMLINK libspdk_bdev_virtio.so 00:04:28.466 SO libspdk_bdev_raid.so.6.0 00:04:28.466 SYMLINK libspdk_bdev_raid.so 00:04:29.845 LIB libspdk_bdev_nvme.a 00:04:29.845 SO libspdk_bdev_nvme.so.7.1 00:04:30.103 SYMLINK libspdk_bdev_nvme.so 00:04:30.362 CC module/event/subsystems/sock/sock.o 00:04:30.362 CC module/event/subsystems/iobuf/iobuf.o 00:04:30.362 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:30.362 CC module/event/subsystems/keyring/keyring.o 00:04:30.362 CC module/event/subsystems/fsdev/fsdev.o 00:04:30.362 CC module/event/subsystems/scheduler/scheduler.o 00:04:30.362 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:30.362 CC module/event/subsystems/vmd/vmd.o 00:04:30.362 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:30.362 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:30.621 LIB libspdk_event_vhost_blk.a 00:04:30.621 LIB libspdk_event_vfu_tgt.a 00:04:30.621 LIB libspdk_event_keyring.a 00:04:30.621 LIB libspdk_event_fsdev.a 00:04:30.621 LIB libspdk_event_sock.a 00:04:30.621 SO libspdk_event_vhost_blk.so.3.0 00:04:30.621 SO libspdk_event_vfu_tgt.so.3.0 00:04:30.621 SO libspdk_event_keyring.so.1.0 00:04:30.621 LIB libspdk_event_scheduler.a 00:04:30.621 LIB libspdk_event_vmd.a 00:04:30.621 LIB libspdk_event_iobuf.a 00:04:30.621 SO libspdk_event_fsdev.so.1.0 00:04:30.621 SO 
libspdk_event_sock.so.5.0 00:04:30.621 SO libspdk_event_scheduler.so.4.0 00:04:30.621 SO libspdk_event_vmd.so.6.0 00:04:30.621 SO libspdk_event_iobuf.so.3.0 00:04:30.621 SYMLINK libspdk_event_vhost_blk.so 00:04:30.621 SYMLINK libspdk_event_keyring.so 00:04:30.621 SYMLINK libspdk_event_vfu_tgt.so 00:04:30.621 SYMLINK libspdk_event_fsdev.so 00:04:30.621 SYMLINK libspdk_event_sock.so 00:04:30.621 SYMLINK libspdk_event_scheduler.so 00:04:30.621 SYMLINK libspdk_event_vmd.so 00:04:30.621 SYMLINK libspdk_event_iobuf.so 00:04:30.880 CC module/event/subsystems/accel/accel.o 00:04:31.139 LIB libspdk_event_accel.a 00:04:31.139 SO libspdk_event_accel.so.6.0 00:04:31.139 SYMLINK libspdk_event_accel.so 00:04:31.398 CC module/event/subsystems/bdev/bdev.o 00:04:31.657 LIB libspdk_event_bdev.a 00:04:31.657 SO libspdk_event_bdev.so.6.0 00:04:31.657 SYMLINK libspdk_event_bdev.so 00:04:31.916 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:31.916 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:31.916 CC module/event/subsystems/nbd/nbd.o 00:04:31.916 CC module/event/subsystems/ublk/ublk.o 00:04:31.916 CC module/event/subsystems/scsi/scsi.o 00:04:32.176 LIB libspdk_event_nbd.a 00:04:32.176 LIB libspdk_event_ublk.a 00:04:32.176 LIB libspdk_event_scsi.a 00:04:32.176 SO libspdk_event_nbd.so.6.0 00:04:32.176 SO libspdk_event_ublk.so.3.0 00:04:32.176 SO libspdk_event_scsi.so.6.0 00:04:32.176 SYMLINK libspdk_event_nbd.so 00:04:32.176 SYMLINK libspdk_event_ublk.so 00:04:32.176 SYMLINK libspdk_event_scsi.so 00:04:32.435 LIB libspdk_event_nvmf.a 00:04:32.435 SO libspdk_event_nvmf.so.6.0 00:04:32.435 SYMLINK libspdk_event_nvmf.so 00:04:32.435 CC module/event/subsystems/iscsi/iscsi.o 00:04:32.435 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:32.694 LIB libspdk_event_vhost_scsi.a 00:04:32.694 LIB libspdk_event_iscsi.a 00:04:32.694 SO libspdk_event_vhost_scsi.so.3.0 00:04:32.694 SO libspdk_event_iscsi.so.6.0 00:04:32.694 SYMLINK libspdk_event_vhost_scsi.so 00:04:32.953 SYMLINK libspdk_event_iscsi.so 00:04:32.953 SO libspdk.so.6.0 00:04:32.953 SYMLINK libspdk.so 00:04:33.212 CXX app/trace/trace.o 00:04:33.212 CC app/trace_record/trace_record.o 00:04:33.212 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:33.212 CC app/nvmf_tgt/nvmf_main.o 00:04:33.470 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.470 CC test/thread/poller_perf/poller_perf.o 00:04:33.470 CC examples/ioat/perf/perf.o 00:04:33.470 CC examples/util/zipf/zipf.o 00:04:33.470 CC test/dma/test_dma/test_dma.o 00:04:33.470 CC test/app/bdev_svc/bdev_svc.o 00:04:33.470 LINK poller_perf 00:04:33.470 LINK interrupt_tgt 00:04:33.470 LINK zipf 00:04:33.470 LINK nvmf_tgt 00:04:33.470 LINK iscsi_tgt 00:04:33.728 LINK spdk_trace_record 00:04:33.728 LINK ioat_perf 00:04:33.728 LINK bdev_svc 00:04:33.728 LINK spdk_trace 00:04:33.728 CC test/app/histogram_perf/histogram_perf.o 00:04:33.728 CC test/app/jsoncat/jsoncat.o 00:04:33.986 CC test/app/stub/stub.o 00:04:33.986 CC examples/ioat/verify/verify.o 00:04:33.986 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:33.986 CC app/spdk_lspci/spdk_lspci.o 00:04:33.986 CC app/spdk_tgt/spdk_tgt.o 00:04:33.986 CC app/spdk_nvme_perf/perf.o 00:04:33.986 LINK jsoncat 00:04:33.986 LINK histogram_perf 00:04:33.986 TEST_HEADER include/spdk/accel.h 00:04:33.986 TEST_HEADER include/spdk/accel_module.h 00:04:33.986 LINK test_dma 00:04:33.986 TEST_HEADER include/spdk/assert.h 00:04:33.986 TEST_HEADER include/spdk/barrier.h 00:04:33.986 TEST_HEADER include/spdk/base64.h 00:04:33.986 TEST_HEADER include/spdk/bdev.h 00:04:33.986 TEST_HEADER 
include/spdk/bdev_module.h 00:04:33.986 LINK spdk_lspci 00:04:33.986 TEST_HEADER include/spdk/bdev_zone.h 00:04:33.986 TEST_HEADER include/spdk/bit_array.h 00:04:33.986 TEST_HEADER include/spdk/bit_pool.h 00:04:33.986 TEST_HEADER include/spdk/blob_bdev.h 00:04:33.986 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:33.986 TEST_HEADER include/spdk/blobfs.h 00:04:33.986 TEST_HEADER include/spdk/blob.h 00:04:33.986 TEST_HEADER include/spdk/conf.h 00:04:33.986 TEST_HEADER include/spdk/config.h 00:04:33.986 TEST_HEADER include/spdk/cpuset.h 00:04:33.986 LINK stub 00:04:33.986 TEST_HEADER include/spdk/crc16.h 00:04:33.986 TEST_HEADER include/spdk/crc32.h 00:04:33.986 TEST_HEADER include/spdk/crc64.h 00:04:33.986 TEST_HEADER include/spdk/dif.h 00:04:33.986 TEST_HEADER include/spdk/dma.h 00:04:33.986 TEST_HEADER include/spdk/endian.h 00:04:33.986 TEST_HEADER include/spdk/env_dpdk.h 00:04:33.986 TEST_HEADER include/spdk/env.h 00:04:33.986 TEST_HEADER include/spdk/event.h 00:04:33.987 TEST_HEADER include/spdk/fd_group.h 00:04:33.987 TEST_HEADER include/spdk/fd.h 00:04:33.987 TEST_HEADER include/spdk/file.h 00:04:33.987 TEST_HEADER include/spdk/fsdev.h 00:04:33.987 TEST_HEADER include/spdk/fsdev_module.h 00:04:33.987 TEST_HEADER include/spdk/ftl.h 00:04:33.987 TEST_HEADER include/spdk/gpt_spec.h 00:04:33.987 TEST_HEADER include/spdk/hexlify.h 00:04:33.987 TEST_HEADER include/spdk/histogram_data.h 00:04:34.245 TEST_HEADER include/spdk/idxd.h 00:04:34.245 TEST_HEADER include/spdk/idxd_spec.h 00:04:34.245 TEST_HEADER include/spdk/init.h 00:04:34.245 TEST_HEADER include/spdk/ioat.h 00:04:34.245 TEST_HEADER include/spdk/ioat_spec.h 00:04:34.245 TEST_HEADER include/spdk/iscsi_spec.h 00:04:34.245 TEST_HEADER include/spdk/json.h 00:04:34.245 TEST_HEADER include/spdk/jsonrpc.h 00:04:34.245 TEST_HEADER include/spdk/keyring.h 00:04:34.245 TEST_HEADER include/spdk/keyring_module.h 00:04:34.245 TEST_HEADER include/spdk/likely.h 00:04:34.245 TEST_HEADER include/spdk/log.h 00:04:34.245 TEST_HEADER include/spdk/lvol.h 00:04:34.245 TEST_HEADER include/spdk/md5.h 00:04:34.245 TEST_HEADER include/spdk/memory.h 00:04:34.245 TEST_HEADER include/spdk/mmio.h 00:04:34.245 TEST_HEADER include/spdk/nbd.h 00:04:34.245 LINK verify 00:04:34.245 TEST_HEADER include/spdk/net.h 00:04:34.245 TEST_HEADER include/spdk/notify.h 00:04:34.245 TEST_HEADER include/spdk/nvme.h 00:04:34.245 TEST_HEADER include/spdk/nvme_intel.h 00:04:34.245 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:34.245 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:34.245 TEST_HEADER include/spdk/nvme_spec.h 00:04:34.245 TEST_HEADER include/spdk/nvme_zns.h 00:04:34.245 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:34.245 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:34.245 TEST_HEADER include/spdk/nvmf.h 00:04:34.245 TEST_HEADER include/spdk/nvmf_spec.h 00:04:34.245 TEST_HEADER include/spdk/nvmf_transport.h 00:04:34.245 TEST_HEADER include/spdk/opal.h 00:04:34.245 TEST_HEADER include/spdk/opal_spec.h 00:04:34.245 TEST_HEADER include/spdk/pci_ids.h 00:04:34.245 TEST_HEADER include/spdk/pipe.h 00:04:34.245 TEST_HEADER include/spdk/queue.h 00:04:34.245 TEST_HEADER include/spdk/reduce.h 00:04:34.245 TEST_HEADER include/spdk/rpc.h 00:04:34.245 TEST_HEADER include/spdk/scheduler.h 00:04:34.245 TEST_HEADER include/spdk/scsi.h 00:04:34.245 TEST_HEADER include/spdk/scsi_spec.h 00:04:34.245 TEST_HEADER include/spdk/sock.h 00:04:34.245 TEST_HEADER include/spdk/stdinc.h 00:04:34.245 TEST_HEADER include/spdk/string.h 00:04:34.245 TEST_HEADER include/spdk/thread.h 00:04:34.245 TEST_HEADER 
include/spdk/trace.h 00:04:34.245 TEST_HEADER include/spdk/trace_parser.h 00:04:34.245 TEST_HEADER include/spdk/tree.h 00:04:34.245 TEST_HEADER include/spdk/ublk.h 00:04:34.245 TEST_HEADER include/spdk/util.h 00:04:34.245 TEST_HEADER include/spdk/uuid.h 00:04:34.245 TEST_HEADER include/spdk/version.h 00:04:34.245 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:34.245 LINK spdk_tgt 00:04:34.245 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:34.245 TEST_HEADER include/spdk/vhost.h 00:04:34.245 TEST_HEADER include/spdk/vmd.h 00:04:34.245 TEST_HEADER include/spdk/xor.h 00:04:34.245 TEST_HEADER include/spdk/zipf.h 00:04:34.245 CXX test/cpp_headers/accel.o 00:04:34.245 CC app/spdk_nvme_identify/identify.o 00:04:34.245 CC app/spdk_nvme_discover/discovery_aer.o 00:04:34.245 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:34.245 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:34.503 LINK nvme_fuzz 00:04:34.503 CXX test/cpp_headers/accel_module.o 00:04:34.503 CXX test/cpp_headers/assert.o 00:04:34.503 LINK spdk_nvme_discover 00:04:34.503 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:34.503 CC test/env/mem_callbacks/mem_callbacks.o 00:04:34.503 CC examples/thread/thread/thread_ex.o 00:04:34.503 CXX test/cpp_headers/barrier.o 00:04:34.761 CXX test/cpp_headers/base64.o 00:04:34.761 CC test/env/vtophys/vtophys.o 00:04:34.761 CC examples/sock/hello_world/hello_sock.o 00:04:34.761 LINK thread 00:04:34.761 CC examples/vmd/lsvmd/lsvmd.o 00:04:35.019 LINK vtophys 00:04:35.019 CXX test/cpp_headers/bdev.o 00:04:35.019 LINK lsvmd 00:04:35.019 LINK vhost_fuzz 00:04:35.019 LINK hello_sock 00:04:35.277 CXX test/cpp_headers/bdev_module.o 00:04:35.277 LINK mem_callbacks 00:04:35.277 LINK spdk_nvme_perf 00:04:35.277 CC examples/vmd/led/led.o 00:04:35.277 CC examples/idxd/perf/perf.o 00:04:35.277 LINK spdk_nvme_identify 00:04:35.277 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:35.277 CXX test/cpp_headers/bdev_zone.o 00:04:35.277 CC test/env/memory/memory_ut.o 00:04:35.277 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:35.277 CC examples/accel/perf/accel_perf.o 00:04:35.536 LINK led 00:04:35.536 CC app/spdk_top/spdk_top.o 00:04:35.536 LINK env_dpdk_post_init 00:04:35.536 CC examples/blob/hello_world/hello_blob.o 00:04:35.536 CXX test/cpp_headers/bit_array.o 00:04:35.794 LINK hello_fsdev 00:04:35.794 LINK idxd_perf 00:04:35.794 CC app/vhost/vhost.o 00:04:35.794 CXX test/cpp_headers/bit_pool.o 00:04:35.794 LINK hello_blob 00:04:35.794 CC app/spdk_dd/spdk_dd.o 00:04:36.052 CXX test/cpp_headers/blob_bdev.o 00:04:36.052 LINK vhost 00:04:36.052 CC test/env/pci/pci_ut.o 00:04:36.052 LINK accel_perf 00:04:36.052 CC app/fio/nvme/fio_plugin.o 00:04:36.318 CXX test/cpp_headers/blobfs_bdev.o 00:04:36.318 CC examples/blob/cli/blobcli.o 00:04:36.318 CC app/fio/bdev/fio_plugin.o 00:04:36.318 CXX test/cpp_headers/blobfs.o 00:04:36.576 CC examples/nvme/hello_world/hello_world.o 00:04:36.576 LINK spdk_dd 00:04:36.576 LINK pci_ut 00:04:36.576 LINK iscsi_fuzz 00:04:36.576 CXX test/cpp_headers/blob.o 00:04:36.576 CXX test/cpp_headers/conf.o 00:04:36.576 LINK spdk_top 00:04:36.834 LINK hello_world 00:04:36.834 CXX test/cpp_headers/config.o 00:04:36.834 CXX test/cpp_headers/cpuset.o 00:04:36.834 CXX test/cpp_headers/crc16.o 00:04:36.834 LINK memory_ut 00:04:36.834 LINK spdk_nvme 00:04:36.834 LINK blobcli 00:04:36.834 CXX test/cpp_headers/crc32.o 00:04:37.093 LINK spdk_bdev 00:04:37.093 CC examples/nvme/reconnect/reconnect.o 00:04:37.093 CC examples/bdev/hello_world/hello_bdev.o 00:04:37.093 CC 
examples/bdev/bdevperf/bdevperf.o 00:04:37.093 CXX test/cpp_headers/crc64.o 00:04:37.093 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:37.093 CC test/event/event_perf/event_perf.o 00:04:37.093 CC examples/nvme/arbitration/arbitration.o 00:04:37.093 CC examples/nvme/hotplug/hotplug.o 00:04:37.093 CC test/nvme/aer/aer.o 00:04:37.093 CXX test/cpp_headers/dif.o 00:04:37.093 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:37.352 LINK hello_bdev 00:04:37.352 LINK event_perf 00:04:37.352 CXX test/cpp_headers/dma.o 00:04:37.352 LINK reconnect 00:04:37.352 LINK hotplug 00:04:37.352 LINK cmb_copy 00:04:37.611 CXX test/cpp_headers/endian.o 00:04:37.611 CC test/event/reactor/reactor.o 00:04:37.611 LINK aer 00:04:37.611 LINK arbitration 00:04:37.611 CXX test/cpp_headers/env_dpdk.o 00:04:37.611 CC test/nvme/reset/reset.o 00:04:37.611 CC examples/nvme/abort/abort.o 00:04:37.611 LINK reactor 00:04:37.611 CXX test/cpp_headers/env.o 00:04:37.611 CC test/nvme/sgl/sgl.o 00:04:37.611 LINK nvme_manage 00:04:37.611 CC test/nvme/e2edp/nvme_dp.o 00:04:37.611 CXX test/cpp_headers/event.o 00:04:37.870 CC test/event/reactor_perf/reactor_perf.o 00:04:37.870 CXX test/cpp_headers/fd_group.o 00:04:37.870 CC test/nvme/overhead/overhead.o 00:04:37.870 CC test/nvme/err_injection/err_injection.o 00:04:37.870 CC test/nvme/startup/startup.o 00:04:38.129 LINK reset 00:04:38.129 LINK sgl 00:04:38.129 LINK bdevperf 00:04:38.129 LINK nvme_dp 00:04:38.129 LINK reactor_perf 00:04:38.129 CXX test/cpp_headers/fd.o 00:04:38.129 LINK abort 00:04:38.129 LINK startup 00:04:38.129 LINK err_injection 00:04:38.129 CXX test/cpp_headers/file.o 00:04:38.388 CC test/nvme/reserve/reserve.o 00:04:38.388 CC test/nvme/simple_copy/simple_copy.o 00:04:38.388 LINK overhead 00:04:38.388 CC test/event/app_repeat/app_repeat.o 00:04:38.388 CXX test/cpp_headers/fsdev.o 00:04:38.388 CC test/nvme/connect_stress/connect_stress.o 00:04:38.388 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:38.389 CC test/event/scheduler/scheduler.o 00:04:38.389 CC test/nvme/boot_partition/boot_partition.o 00:04:38.389 CXX test/cpp_headers/fsdev_module.o 00:04:38.649 LINK reserve 00:04:38.649 LINK app_repeat 00:04:38.649 CC test/nvme/compliance/nvme_compliance.o 00:04:38.649 LINK simple_copy 00:04:38.649 LINK pmr_persistence 00:04:38.649 LINK connect_stress 00:04:38.649 LINK boot_partition 00:04:38.649 CXX test/cpp_headers/ftl.o 00:04:38.649 CC test/nvme/fused_ordering/fused_ordering.o 00:04:38.649 CXX test/cpp_headers/gpt_spec.o 00:04:38.649 LINK scheduler 00:04:38.649 CXX test/cpp_headers/hexlify.o 00:04:38.908 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:38.908 CC test/rpc_client/rpc_client_test.o 00:04:38.908 CXX test/cpp_headers/histogram_data.o 00:04:38.908 CXX test/cpp_headers/idxd.o 00:04:38.908 CC test/nvme/fdp/fdp.o 00:04:38.908 LINK fused_ordering 00:04:38.908 CC test/nvme/cuse/cuse.o 00:04:38.908 LINK nvme_compliance 00:04:38.908 CC examples/nvmf/nvmf/nvmf.o 00:04:39.167 LINK doorbell_aers 00:04:39.167 LINK rpc_client_test 00:04:39.167 CXX test/cpp_headers/idxd_spec.o 00:04:39.167 CC test/accel/dif/dif.o 00:04:39.167 CXX test/cpp_headers/init.o 00:04:39.167 CXX test/cpp_headers/ioat.o 00:04:39.167 CXX test/cpp_headers/ioat_spec.o 00:04:39.167 CC test/blobfs/mkfs/mkfs.o 00:04:39.167 CXX test/cpp_headers/iscsi_spec.o 00:04:39.426 LINK fdp 00:04:39.426 CXX test/cpp_headers/json.o 00:04:39.426 CC test/lvol/esnap/esnap.o 00:04:39.426 LINK nvmf 00:04:39.426 CXX test/cpp_headers/jsonrpc.o 00:04:39.426 CXX test/cpp_headers/keyring.o 00:04:39.426 CXX 
test/cpp_headers/keyring_module.o 00:04:39.426 CXX test/cpp_headers/likely.o 00:04:39.426 LINK mkfs 00:04:39.426 CXX test/cpp_headers/log.o 00:04:39.685 CXX test/cpp_headers/lvol.o 00:04:39.685 CXX test/cpp_headers/md5.o 00:04:39.685 CXX test/cpp_headers/memory.o 00:04:39.685 CXX test/cpp_headers/mmio.o 00:04:39.685 CXX test/cpp_headers/nbd.o 00:04:39.685 CXX test/cpp_headers/net.o 00:04:39.685 CXX test/cpp_headers/notify.o 00:04:39.685 CXX test/cpp_headers/nvme.o 00:04:39.685 CXX test/cpp_headers/nvme_intel.o 00:04:39.685 CXX test/cpp_headers/nvme_ocssd.o 00:04:39.957 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:39.957 CXX test/cpp_headers/nvme_spec.o 00:04:39.957 CXX test/cpp_headers/nvme_zns.o 00:04:39.957 CXX test/cpp_headers/nvmf_cmd.o 00:04:39.957 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:39.957 CXX test/cpp_headers/nvmf.o 00:04:39.957 CXX test/cpp_headers/nvmf_spec.o 00:04:39.957 CXX test/cpp_headers/nvmf_transport.o 00:04:39.957 CXX test/cpp_headers/opal.o 00:04:39.957 LINK dif 00:04:40.227 CXX test/cpp_headers/opal_spec.o 00:04:40.227 CXX test/cpp_headers/pci_ids.o 00:04:40.227 CXX test/cpp_headers/pipe.o 00:04:40.227 CXX test/cpp_headers/queue.o 00:04:40.227 CXX test/cpp_headers/reduce.o 00:04:40.227 CXX test/cpp_headers/rpc.o 00:04:40.227 CXX test/cpp_headers/scheduler.o 00:04:40.227 CXX test/cpp_headers/scsi.o 00:04:40.227 CXX test/cpp_headers/scsi_spec.o 00:04:40.227 CXX test/cpp_headers/sock.o 00:04:40.227 CXX test/cpp_headers/stdinc.o 00:04:40.486 CXX test/cpp_headers/string.o 00:04:40.486 CXX test/cpp_headers/thread.o 00:04:40.486 CXX test/cpp_headers/trace.o 00:04:40.486 CXX test/cpp_headers/trace_parser.o 00:04:40.486 CXX test/cpp_headers/tree.o 00:04:40.486 CXX test/cpp_headers/ublk.o 00:04:40.486 CXX test/cpp_headers/util.o 00:04:40.486 CXX test/cpp_headers/uuid.o 00:04:40.486 CC test/bdev/bdevio/bdevio.o 00:04:40.486 CXX test/cpp_headers/version.o 00:04:40.486 CXX test/cpp_headers/vfio_user_pci.o 00:04:40.486 LINK cuse 00:04:40.745 CXX test/cpp_headers/vfio_user_spec.o 00:04:40.745 CXX test/cpp_headers/vhost.o 00:04:40.745 CXX test/cpp_headers/vmd.o 00:04:40.745 CXX test/cpp_headers/xor.o 00:04:40.745 CXX test/cpp_headers/zipf.o 00:04:41.004 LINK bdevio 00:04:46.277 LINK esnap 00:04:46.277 ************************************ 00:04:46.277 END TEST make 00:04:46.277 ************************************ 00:04:46.277 00:04:46.277 real 1m41.851s 00:04:46.277 user 9m27.154s 00:04:46.277 sys 1m37.930s 00:04:46.277 11:07:52 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:46.277 11:07:52 make -- common/autotest_common.sh@10 -- $ set +x 00:04:46.277 11:07:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:46.277 11:07:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:46.278 11:07:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:46.278 11:07:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.278 11:07:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:46.278 11:07:52 -- pm/common@44 -- $ pid=5292 00:04:46.278 11:07:52 -- pm/common@50 -- $ kill -TERM 5292 00:04:46.278 11:07:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.278 11:07:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:46.278 11:07:52 -- pm/common@44 -- $ pid=5294 00:04:46.278 11:07:52 -- pm/common@50 -- $ kill -TERM 5294 00:04:46.278 11:07:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:46.278 11:07:52 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:46.278 11:07:52 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.278 11:07:52 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.278 11:07:52 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.278 11:07:53 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.278 11:07:53 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.278 11:07:53 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.278 11:07:53 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.278 11:07:53 -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.278 11:07:53 -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.278 11:07:53 -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.278 11:07:53 -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.278 11:07:53 -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.278 11:07:53 -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.278 11:07:53 -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.278 11:07:53 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.278 11:07:53 -- scripts/common.sh@344 -- # case "$op" in 00:04:46.278 11:07:53 -- scripts/common.sh@345 -- # : 1 00:04:46.278 11:07:53 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.278 11:07:53 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.278 11:07:53 -- scripts/common.sh@365 -- # decimal 1 00:04:46.278 11:07:53 -- scripts/common.sh@353 -- # local d=1 00:04:46.278 11:07:53 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.278 11:07:53 -- scripts/common.sh@355 -- # echo 1 00:04:46.278 11:07:53 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.278 11:07:53 -- scripts/common.sh@366 -- # decimal 2 00:04:46.278 11:07:53 -- scripts/common.sh@353 -- # local d=2 00:04:46.278 11:07:53 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.278 11:07:53 -- scripts/common.sh@355 -- # echo 2 00:04:46.278 11:07:53 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.278 11:07:53 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.278 11:07:53 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.278 11:07:53 -- scripts/common.sh@368 -- # return 0 00:04:46.278 11:07:53 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.278 11:07:53 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.278 --rc genhtml_branch_coverage=1 00:04:46.278 --rc genhtml_function_coverage=1 00:04:46.278 --rc genhtml_legend=1 00:04:46.278 --rc geninfo_all_blocks=1 00:04:46.278 --rc geninfo_unexecuted_blocks=1 00:04:46.278 00:04:46.278 ' 00:04:46.278 11:07:53 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.278 --rc genhtml_branch_coverage=1 00:04:46.278 --rc genhtml_function_coverage=1 00:04:46.278 --rc genhtml_legend=1 00:04:46.278 --rc geninfo_all_blocks=1 00:04:46.278 --rc geninfo_unexecuted_blocks=1 00:04:46.278 00:04:46.278 ' 00:04:46.278 11:07:53 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.278 --rc genhtml_branch_coverage=1 00:04:46.278 --rc genhtml_function_coverage=1 00:04:46.278 --rc genhtml_legend=1 00:04:46.278 --rc geninfo_all_blocks=1 
00:04:46.278 --rc geninfo_unexecuted_blocks=1 00:04:46.278 00:04:46.278 ' 00:04:46.278 11:07:53 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.278 --rc genhtml_branch_coverage=1 00:04:46.278 --rc genhtml_function_coverage=1 00:04:46.278 --rc genhtml_legend=1 00:04:46.278 --rc geninfo_all_blocks=1 00:04:46.278 --rc geninfo_unexecuted_blocks=1 00:04:46.278 00:04:46.278 ' 00:04:46.278 11:07:53 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.278 11:07:53 -- nvmf/common.sh@7 -- # uname -s 00:04:46.278 11:07:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.278 11:07:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.278 11:07:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.278 11:07:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.278 11:07:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.278 11:07:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.278 11:07:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.278 11:07:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.278 11:07:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.278 11:07:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.278 11:07:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:04:46.278 11:07:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:04:46.278 11:07:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.278 11:07:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.278 11:07:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:46.278 11:07:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.278 11:07:53 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.278 11:07:53 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:46.278 11:07:53 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.278 11:07:53 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.278 11:07:53 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.278 11:07:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.278 11:07:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.278 11:07:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.278 11:07:53 -- paths/export.sh@5 -- # export PATH 00:04:46.278 11:07:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.278 11:07:53 -- nvmf/common.sh@51 -- # : 0 00:04:46.278 11:07:53 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:46.278 11:07:53 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:46.278 11:07:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.278 11:07:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.278 11:07:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.278 11:07:53 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:46.278 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:46.278 11:07:53 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:46.278 11:07:53 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:46.278 11:07:53 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:46.278 11:07:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:46.278 11:07:53 -- spdk/autotest.sh@32 -- # uname -s 00:04:46.538 11:07:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:46.538 11:07:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:46.538 11:07:53 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.538 11:07:53 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:46.538 11:07:53 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.538 11:07:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:46.538 11:07:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:46.538 11:07:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:46.538 11:07:53 -- spdk/autotest.sh@48 -- # udevadm_pid=55065 00:04:46.538 11:07:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:46.538 11:07:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:46.538 11:07:53 -- pm/common@17 -- # local monitor 00:04:46.538 11:07:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.538 11:07:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.538 11:07:53 -- pm/common@21 -- # date +%s 00:04:46.538 11:07:53 -- pm/common@21 -- # date +%s 00:04:46.538 11:07:53 -- pm/common@25 -- # sleep 1 00:04:46.538 11:07:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733828873 00:04:46.538 11:07:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733828873 00:04:46.538 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733828873_collect-vmstat.pm.log 00:04:46.538 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733828873_collect-cpu-load.pm.log 00:04:47.474 11:07:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:47.474 11:07:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:47.474 11:07:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.474 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:04:47.474 11:07:54 -- spdk/autotest.sh@59 -- # create_test_list 
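A few entries up, autotest.sh saves the host's existing core_pattern ('|/usr/lib/systemd/systemd-coredump ...') and swaps in SPDK's core-collector.sh so that any crash during the run lands in the output/coredumps directory. A minimal sketch of that swap, assuming the standard /proc/sys/kernel/core_pattern knob (the redirection target itself is not visible in the xtrace):

    # remember the current handler so it can be restored when the run ends
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
    # pipe every core dump to SPDK's collector with the pid, signal and timestamp
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern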
00:04:47.474 11:07:54 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:47.474 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:04:47.474 11:07:54 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:47.474 11:07:54 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:47.474 11:07:54 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:47.474 11:07:54 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:47.474 11:07:54 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:47.474 11:07:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:47.474 11:07:54 -- common/autotest_common.sh@1457 -- # uname 00:04:47.474 11:07:54 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:47.474 11:07:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:47.474 11:07:54 -- common/autotest_common.sh@1477 -- # uname 00:04:47.474 11:07:54 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:47.474 11:07:54 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:47.474 11:07:54 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:47.733 lcov: LCOV version 1.15 00:04:47.733 11:07:54 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:02.609 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:02.609 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:20.699 11:08:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:20.699 11:08:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.699 11:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:20.699 11:08:25 -- spdk/autotest.sh@78 -- # rm -f 00:05:20.699 11:08:25 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.699 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:20.699 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:20.699 11:08:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:20.699 11:08:26 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:20.699 11:08:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:20.699 11:08:26 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:20.699 11:08:26 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:20.699 11:08:26 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:20.699 11:08:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:20.699 11:08:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:20.699 11:08:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:20.699 11:08:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:20.699 11:08:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:20.699 11:08:26 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:20.699 11:08:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:20.699 11:08:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:20.699 11:08:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:20.699 11:08:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:20.699 11:08:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:20.699 11:08:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:20.699 11:08:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:20.699 11:08:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:20.699 11:08:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:20.699 11:08:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:20.699 11:08:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:20.699 11:08:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:20.699 11:08:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:20.699 11:08:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:20.699 11:08:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:20.699 11:08:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:20.699 11:08:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:20.699 11:08:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:20.699 11:08:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:20.699 11:08:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:20.699 11:08:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:20.699 11:08:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:20.699 11:08:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:20.699 11:08:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:20.699 No valid GPT data, bailing 00:05:20.699 11:08:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:20.699 11:08:26 -- scripts/common.sh@394 -- # pt= 00:05:20.699 11:08:26 -- scripts/common.sh@395 -- # return 1 00:05:20.699 11:08:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:20.699 1+0 records in 00:05:20.699 1+0 records out 00:05:20.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415862 s, 252 MB/s 00:05:20.699 11:08:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:20.699 11:08:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:20.699 11:08:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:20.699 11:08:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:20.699 11:08:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:20.699 No valid GPT data, bailing 00:05:20.699 11:08:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:20.699 11:08:26 -- scripts/common.sh@394 -- # pt= 00:05:20.699 11:08:26 -- scripts/common.sh@395 -- # return 1 00:05:20.699 11:08:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:20.699 1+0 records in 00:05:20.699 1+0 records out 00:05:20.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432736 s, 242 MB/s 00:05:20.699 11:08:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 
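The loop traced here walks every whole NVMe namespace (the nvme*n!(*p*) extglob skips partitions), leaves zoned namespaces alone, and zeroes the first MiB of any namespace without a recognizable partition table so the later block tests start from clean devices. A rough standalone equivalent, not the autotest helpers themselves, assuming the usual sysfs layout:

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        name=${dev#/dev/}
        # 'none' marks a conventional (non-zoned) namespace; anything else is zoned
        if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
            continue
        fi
        # an empty PTTYPE means blkid sees no partition table, so the namespace is safe to scrub
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done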
00:05:20.699 11:08:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:20.699 11:08:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:20.699 11:08:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:20.699 11:08:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:20.699 No valid GPT data, bailing 00:05:20.699 11:08:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:20.699 11:08:26 -- scripts/common.sh@394 -- # pt= 00:05:20.699 11:08:26 -- scripts/common.sh@395 -- # return 1 00:05:20.699 11:08:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:20.699 1+0 records in 00:05:20.699 1+0 records out 00:05:20.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044665 s, 235 MB/s 00:05:20.699 11:08:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:20.699 11:08:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:20.699 11:08:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:20.699 11:08:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:20.699 11:08:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:20.699 No valid GPT data, bailing 00:05:20.699 11:08:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:20.699 11:08:26 -- scripts/common.sh@394 -- # pt= 00:05:20.699 11:08:26 -- scripts/common.sh@395 -- # return 1 00:05:20.699 11:08:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:20.699 1+0 records in 00:05:20.699 1+0 records out 00:05:20.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412536 s, 254 MB/s 00:05:20.699 11:08:26 -- spdk/autotest.sh@105 -- # sync 00:05:20.699 11:08:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:20.699 11:08:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:20.699 11:08:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:21.637 11:08:28 -- spdk/autotest.sh@111 -- # uname -s 00:05:21.637 11:08:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:21.637 11:08:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:21.637 11:08:28 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:22.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.571 Hugepages 00:05:22.571 node hugesize free / total 00:05:22.571 node0 1048576kB 0 / 0 00:05:22.571 node0 2048kB 0 / 0 00:05:22.571 00:05:22.571 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.571 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:22.571 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:22.571 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:22.571 11:08:29 -- spdk/autotest.sh@117 -- # uname -s 00:05:22.571 11:08:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:22.571 11:08:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:22.571 11:08:29 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.505 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.505 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.505 11:08:30 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:24.441 11:08:31 -- 
common/autotest_common.sh@1518 -- # bdfs=() 00:05:24.441 11:08:31 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:24.441 11:08:31 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:24.441 11:08:31 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:24.441 11:08:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:24.441 11:08:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:24.441 11:08:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.441 11:08:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.441 11:08:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:24.719 11:08:31 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:24.719 11:08:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:24.719 11:08:31 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.978 Waiting for block devices as requested 00:05:24.978 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:24.978 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.236 11:08:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.236 11:08:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:25.236 11:08:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:25.236 11:08:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:25.236 11:08:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.236 11:08:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:25.236 11:08:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.236 11:08:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:25.236 11:08:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:25.236 11:08:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:25.236 11:08:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:25.236 11:08:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.236 11:08:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.236 11:08:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.236 11:08:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.236 11:08:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.237 11:08:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:25.237 11:08:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.237 11:08:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.237 11:08:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.237 11:08:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.237 11:08:31 -- common/autotest_common.sh@1543 -- # continue 00:05:25.237 11:08:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.237 11:08:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:25.237 11:08:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
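The readlink/grep sequence above maps a PCI address back to its NVMe character device; this matters because enumeration order is not stable, and in this VM 0000:00:10.0 actually shows up as nvme1. A compact sketch of the same lookup, assuming the standard /sys/class/nvme layout (the BDF list itself comes from scripts/gen_nvme.sh | jq -r '.config[].params.traddr', as traced above):

    bdf=0000:00:10.0
    ctrlr=
    for link in /sys/class/nvme/nvme*; do
        # the resolved path embeds the PCI address, e.g. .../0000:00:10.0/nvme/nvme1
        if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
            ctrlr=/dev/${link##*/}
            break
        fi
    done
    echo "controller behind $bdf: ${ctrlr:-not found}"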
00:05:25.237 11:08:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:25.237 11:08:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.237 11:08:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:25.237 11:08:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.237 11:08:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:25.237 11:08:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:25.237 11:08:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:25.237 11:08:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:25.237 11:08:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.237 11:08:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.237 11:08:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.237 11:08:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.237 11:08:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.237 11:08:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:25.237 11:08:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.237 11:08:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.237 11:08:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.237 11:08:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.237 11:08:31 -- common/autotest_common.sh@1543 -- # continue 00:05:25.237 11:08:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:25.237 11:08:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.237 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.237 11:08:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:25.237 11:08:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.237 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.237 11:08:31 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.062 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.062 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.062 11:08:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:26.062 11:08:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.062 11:08:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.062 11:08:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:26.062 11:08:32 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:26.062 11:08:32 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:26.062 11:08:32 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:26.062 11:08:32 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:26.062 11:08:32 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:26.062 11:08:32 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:26.062 11:08:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:26.062 11:08:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:26.062 11:08:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:26.062 11:08:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.062 11:08:32 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.062 11:08:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:26.320 11:08:32 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:26.320 11:08:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:26.320 11:08:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.320 11:08:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:26.320 11:08:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.320 11:08:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.320 11:08:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.320 11:08:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:26.320 11:08:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.320 11:08:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.320 11:08:32 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:26.320 11:08:32 -- common/autotest_common.sh@1572 -- # return 0 00:05:26.320 11:08:32 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:26.320 11:08:32 -- common/autotest_common.sh@1580 -- # return 0 00:05:26.320 11:08:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:26.320 11:08:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:26.320 11:08:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.320 11:08:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.320 11:08:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:26.320 11:08:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.320 11:08:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.320 11:08:32 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:26.320 11:08:32 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:26.320 11:08:32 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:26.320 11:08:32 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.320 11:08:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.320 11:08:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.320 11:08:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.320 ************************************ 00:05:26.320 START TEST env 00:05:26.320 ************************************ 00:05:26.320 11:08:32 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.320 * Looking for test storage... 
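Just above, opal_revert_cleanup reads each controller's PCI device ID out of sysfs and compares it with 0x0a54; both emulated controllers here report 0x0010 (QEMU's 1b36:0010 NVMe model), so the revert list stays empty and the step is skipped. A small sketch of that filter, assuming the usual sysfs 'device' attribute:

    wanted=0x0a54
    bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0; do
        # 'device' holds the PCI device ID of the function at this address
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$wanted" ]] && bdfs+=("$bdf")
    done
    echo "controllers eligible for OPAL revert: ${#bdfs[@]}"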
00:05:26.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:26.320 11:08:33 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.320 11:08:33 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.320 11:08:33 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.320 11:08:33 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.320 11:08:33 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.320 11:08:33 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.320 11:08:33 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.320 11:08:33 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.320 11:08:33 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.320 11:08:33 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.320 11:08:33 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.320 11:08:33 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.320 11:08:33 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.320 11:08:33 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.320 11:08:33 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.320 11:08:33 env -- scripts/common.sh@344 -- # case "$op" in 00:05:26.320 11:08:33 env -- scripts/common.sh@345 -- # : 1 00:05:26.320 11:08:33 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.320 11:08:33 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.320 11:08:33 env -- scripts/common.sh@365 -- # decimal 1 00:05:26.320 11:08:33 env -- scripts/common.sh@353 -- # local d=1 00:05:26.320 11:08:33 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.320 11:08:33 env -- scripts/common.sh@355 -- # echo 1 00:05:26.320 11:08:33 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.320 11:08:33 env -- scripts/common.sh@366 -- # decimal 2 00:05:26.320 11:08:33 env -- scripts/common.sh@353 -- # local d=2 00:05:26.320 11:08:33 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.320 11:08:33 env -- scripts/common.sh@355 -- # echo 2 00:05:26.320 11:08:33 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.320 11:08:33 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.320 11:08:33 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.320 11:08:33 env -- scripts/common.sh@368 -- # return 0 00:05:26.320 11:08:33 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.320 11:08:33 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.320 --rc genhtml_branch_coverage=1 00:05:26.320 --rc genhtml_function_coverage=1 00:05:26.320 --rc genhtml_legend=1 00:05:26.320 --rc geninfo_all_blocks=1 00:05:26.320 --rc geninfo_unexecuted_blocks=1 00:05:26.320 00:05:26.320 ' 00:05:26.320 11:08:33 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.320 --rc genhtml_branch_coverage=1 00:05:26.320 --rc genhtml_function_coverage=1 00:05:26.320 --rc genhtml_legend=1 00:05:26.320 --rc geninfo_all_blocks=1 00:05:26.321 --rc geninfo_unexecuted_blocks=1 00:05:26.321 00:05:26.321 ' 00:05:26.321 11:08:33 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.321 --rc genhtml_branch_coverage=1 00:05:26.321 --rc genhtml_function_coverage=1 00:05:26.321 --rc 
genhtml_legend=1 00:05:26.321 --rc geninfo_all_blocks=1 00:05:26.321 --rc geninfo_unexecuted_blocks=1 00:05:26.321 00:05:26.321 ' 00:05:26.321 11:08:33 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.321 --rc genhtml_branch_coverage=1 00:05:26.321 --rc genhtml_function_coverage=1 00:05:26.321 --rc genhtml_legend=1 00:05:26.321 --rc geninfo_all_blocks=1 00:05:26.321 --rc geninfo_unexecuted_blocks=1 00:05:26.321 00:05:26.321 ' 00:05:26.321 11:08:33 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:26.321 11:08:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.321 11:08:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.321 11:08:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.579 ************************************ 00:05:26.579 START TEST env_memory 00:05:26.579 ************************************ 00:05:26.579 11:08:33 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:26.579 00:05:26.579 00:05:26.579 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.579 http://cunit.sourceforge.net/ 00:05:26.579 00:05:26.579 00:05:26.579 Suite: memory 00:05:26.579 Test: alloc and free memory map ...[2024-12-10 11:08:33.222263] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:26.579 passed 00:05:26.579 Test: mem map translation ...[2024-12-10 11:08:33.282816] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:26.579 [2024-12-10 11:08:33.282889] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:26.579 [2024-12-10 11:08:33.282987] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:26.579 [2024-12-10 11:08:33.283018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:26.579 passed 00:05:26.579 Test: mem map registration ...[2024-12-10 11:08:33.381245] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:26.579 [2024-12-10 11:08:33.381328] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:26.838 passed 00:05:26.838 Test: mem map adjacent registrations ...passed 00:05:26.838 00:05:26.838 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.838 suites 1 1 n/a 0 0 00:05:26.838 tests 4 4 4 0 0 00:05:26.838 asserts 152 152 152 0 n/a 00:05:26.838 00:05:26.838 Elapsed time = 0.344 seconds 00:05:26.838 00:05:26.838 real 0m0.384s 00:05:26.838 user 0m0.352s 00:05:26.838 sys 0m0.025s 00:05:26.838 11:08:33 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.838 11:08:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:26.838 ************************************ 00:05:26.838 END TEST env_memory 00:05:26.838 ************************************ 00:05:26.838 11:08:33 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:26.838 11:08:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.838 11:08:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.838 11:08:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.838 ************************************ 00:05:26.838 START TEST env_vtophys 00:05:26.838 ************************************ 00:05:26.838 11:08:33 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:26.838 EAL: lib.eal log level changed from notice to debug 00:05:26.838 EAL: Detected lcore 0 as core 0 on socket 0 00:05:26.838 EAL: Detected lcore 1 as core 0 on socket 0 00:05:26.838 EAL: Detected lcore 2 as core 0 on socket 0 00:05:26.838 EAL: Detected lcore 3 as core 0 on socket 0 00:05:26.838 EAL: Detected lcore 4 as core 0 on socket 0 00:05:26.838 EAL: Detected lcore 5 as core 0 on socket 0 00:05:26.838 EAL: Detected lcore 6 as core 0 on socket 0 00:05:26.838 EAL: Detected lcore 7 as core 0 on socket 0 00:05:26.838 EAL: Detected lcore 8 as core 0 on socket 0 00:05:26.838 EAL: Detected lcore 9 as core 0 on socket 0 00:05:26.838 EAL: Maximum logical cores by configuration: 128 00:05:26.838 EAL: Detected CPU lcores: 10 00:05:26.838 EAL: Detected NUMA nodes: 1 00:05:26.838 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:26.838 EAL: Detected shared linkage of DPDK 00:05:27.097 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.097 EAL: Selected IOVA mode 'PA' 00:05:27.097 EAL: Probing VFIO support... 00:05:27.097 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.097 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:27.097 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.097 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.097 EAL: Setting up physically contiguous memory... 
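The EAL settles on IOVA mode 'PA' here because neither vfio nor vfio_pci is loaded in the guest; it looks for the modules under /sys/module before probing and falls back to uio with physical addresses when they are absent. The same precondition can be checked by hand (a sketch, standard sysfs module paths assumed):

    if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
        echo "vfio loaded: the EAL can consider IOVA mode VA"
    else
        echo "vfio missing: the EAL skips VFIO support and uses physical addresses"
    fi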
00:05:27.097 EAL: Setting maximum number of open files to 524288 00:05:27.097 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.097 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.097 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.097 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.097 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.097 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.097 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.097 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.097 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.097 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.097 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.097 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.097 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.097 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.097 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.097 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.097 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.097 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.097 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.097 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.097 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.097 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.097 EAL: Hugepages will be freed exactly as allocated. 00:05:27.097 EAL: No shared files mode enabled, IPC is disabled 00:05:27.097 EAL: No shared files mode enabled, IPC is disabled 00:05:27.097 EAL: TSC frequency is ~2200000 KHz 00:05:27.097 EAL: Main lcore 0 is ready (tid=7f614df08a40;cpuset=[0]) 00:05:27.097 EAL: Trying to obtain current memory policy. 00:05:27.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.097 EAL: Restoring previous memory policy: 0 00:05:27.097 EAL: request: mp_malloc_sync 00:05:27.097 EAL: No shared files mode enabled, IPC is disabled 00:05:27.097 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.097 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.097 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.097 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.097 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:27.097 00:05:27.097 00:05:27.097 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.097 http://cunit.sourceforge.net/ 00:05:27.097 00:05:27.097 00:05:27.097 Suite: components_suite 00:05:27.663 Test: vtophys_malloc_test ...passed 00:05:27.663 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:27.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.663 EAL: Restoring previous memory policy: 4 00:05:27.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.663 EAL: request: mp_malloc_sync 00:05:27.663 EAL: No shared files mode enabled, IPC is disabled 00:05:27.663 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.663 EAL: request: mp_malloc_sync 00:05:27.663 EAL: No shared files mode enabled, IPC is disabled 00:05:27.663 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.663 EAL: Trying to obtain current memory policy. 00:05:27.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.663 EAL: Restoring previous memory policy: 4 00:05:27.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.663 EAL: request: mp_malloc_sync 00:05:27.663 EAL: No shared files mode enabled, IPC is disabled 00:05:27.663 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.663 EAL: request: mp_malloc_sync 00:05:27.663 EAL: No shared files mode enabled, IPC is disabled 00:05:27.663 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.663 EAL: Trying to obtain current memory policy. 00:05:27.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.663 EAL: Restoring previous memory policy: 4 00:05:27.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.663 EAL: request: mp_malloc_sync 00:05:27.663 EAL: No shared files mode enabled, IPC is disabled 00:05:27.663 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.663 EAL: request: mp_malloc_sync 00:05:27.663 EAL: No shared files mode enabled, IPC is disabled 00:05:27.663 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.663 EAL: Trying to obtain current memory policy. 00:05:27.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.663 EAL: Restoring previous memory policy: 4 00:05:27.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.663 EAL: request: mp_malloc_sync 00:05:27.663 EAL: No shared files mode enabled, IPC is disabled 00:05:27.663 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.663 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.663 EAL: request: mp_malloc_sync 00:05:27.663 EAL: No shared files mode enabled, IPC is disabled 00:05:27.664 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.664 EAL: Trying to obtain current memory policy. 00:05:27.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.664 EAL: Restoring previous memory policy: 4 00:05:27.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.664 EAL: request: mp_malloc_sync 00:05:27.664 EAL: No shared files mode enabled, IPC is disabled 00:05:27.664 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.664 EAL: request: mp_malloc_sync 00:05:27.664 EAL: No shared files mode enabled, IPC is disabled 00:05:27.664 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.664 EAL: Trying to obtain current memory policy. 
00:05:27.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.664 EAL: Restoring previous memory policy: 4 00:05:27.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.664 EAL: request: mp_malloc_sync 00:05:27.664 EAL: No shared files mode enabled, IPC is disabled 00:05:27.664 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.922 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.922 EAL: request: mp_malloc_sync 00:05:27.922 EAL: No shared files mode enabled, IPC is disabled 00:05:27.922 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.922 EAL: Trying to obtain current memory policy. 00:05:27.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.922 EAL: Restoring previous memory policy: 4 00:05:27.922 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.922 EAL: request: mp_malloc_sync 00:05:27.922 EAL: No shared files mode enabled, IPC is disabled 00:05:27.922 EAL: Heap on socket 0 was expanded by 130MB 00:05:28.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.180 EAL: request: mp_malloc_sync 00:05:28.180 EAL: No shared files mode enabled, IPC is disabled 00:05:28.180 EAL: Heap on socket 0 was shrunk by 130MB 00:05:28.180 EAL: Trying to obtain current memory policy. 00:05:28.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.437 EAL: Restoring previous memory policy: 4 00:05:28.437 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.437 EAL: request: mp_malloc_sync 00:05:28.437 EAL: No shared files mode enabled, IPC is disabled 00:05:28.437 EAL: Heap on socket 0 was expanded by 258MB 00:05:28.694 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.694 EAL: request: mp_malloc_sync 00:05:28.694 EAL: No shared files mode enabled, IPC is disabled 00:05:28.694 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.953 EAL: Trying to obtain current memory policy. 00:05:28.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.212 EAL: Restoring previous memory policy: 4 00:05:29.212 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.212 EAL: request: mp_malloc_sync 00:05:29.212 EAL: No shared files mode enabled, IPC is disabled 00:05:29.212 EAL: Heap on socket 0 was expanded by 514MB 00:05:29.779 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.038 EAL: request: mp_malloc_sync 00:05:30.038 EAL: No shared files mode enabled, IPC is disabled 00:05:30.038 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.606 EAL: Trying to obtain current memory policy. 
00:05:30.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.866 EAL: Restoring previous memory policy: 4 00:05:30.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.866 EAL: request: mp_malloc_sync 00:05:30.866 EAL: No shared files mode enabled, IPC is disabled 00:05:30.866 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.272 EAL: request: mp_malloc_sync 00:05:32.272 EAL: No shared files mode enabled, IPC is disabled 00:05:32.272 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:33.648 passed 00:05:33.648 00:05:33.648 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.648 suites 1 1 n/a 0 0 00:05:33.648 tests 2 2 2 0 0 00:05:33.648 asserts 5677 5677 5677 0 n/a 00:05:33.648 00:05:33.648 Elapsed time = 6.453 seconds 00:05:33.648 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.648 EAL: request: mp_malloc_sync 00:05:33.648 EAL: No shared files mode enabled, IPC is disabled 00:05:33.648 EAL: Heap on socket 0 was shrunk by 2MB 00:05:33.648 EAL: No shared files mode enabled, IPC is disabled 00:05:33.648 EAL: No shared files mode enabled, IPC is disabled 00:05:33.648 EAL: No shared files mode enabled, IPC is disabled 00:05:33.648 00:05:33.648 real 0m6.780s 00:05:33.648 user 0m5.897s 00:05:33.648 sys 0m0.729s 00:05:33.648 11:08:40 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.648 11:08:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:33.648 ************************************ 00:05:33.648 END TEST env_vtophys 00:05:33.648 ************************************ 00:05:33.649 11:08:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:33.649 11:08:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.649 11:08:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.649 11:08:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.649 ************************************ 00:05:33.649 START TEST env_pci 00:05:33.649 ************************************ 00:05:33.649 11:08:40 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:33.649 00:05:33.649 00:05:33.649 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.649 http://cunit.sourceforge.net/ 00:05:33.649 00:05:33.649 00:05:33.649 Suite: pci 00:05:33.649 Test: pci_hook ...[2024-12-10 11:08:40.455804] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57356 has claimed it 00:05:33.908 EAL: Cannot find device (10000:00:01.0) 00:05:33.908 EAL: Failed to attach device on primary process 00:05:33.908 passed 00:05:33.908 00:05:33.908 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.908 suites 1 1 n/a 0 0 00:05:33.908 tests 1 1 1 0 0 00:05:33.908 asserts 25 25 25 0 n/a 00:05:33.908 00:05:33.908 Elapsed time = 0.008 seconds 00:05:33.908 00:05:33.908 real 0m0.079s 00:05:33.908 user 0m0.033s 00:05:33.908 sys 0m0.045s 00:05:33.908 11:08:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.908 11:08:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:33.908 ************************************ 00:05:33.908 END TEST env_pci 00:05:33.908 ************************************ 00:05:33.908 11:08:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:33.908 11:08:40 env -- env/env.sh@15 -- # uname 00:05:33.908 11:08:40 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:33.908 11:08:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:33.908 11:08:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.908 11:08:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:33.908 11:08:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.908 11:08:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.908 ************************************ 00:05:33.908 START TEST env_dpdk_post_init 00:05:33.908 ************************************ 00:05:33.908 11:08:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.908 EAL: Detected CPU lcores: 10 00:05:33.908 EAL: Detected NUMA nodes: 1 00:05:33.908 EAL: Detected shared linkage of DPDK 00:05:33.908 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.908 EAL: Selected IOVA mode 'PA' 00:05:34.167 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.167 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:34.167 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:34.167 Starting DPDK initialization... 00:05:34.167 Starting SPDK post initialization... 00:05:34.167 SPDK NVMe probe 00:05:34.167 Attaching to 0000:00:10.0 00:05:34.167 Attaching to 0000:00:11.0 00:05:34.167 Attached to 0000:00:10.0 00:05:34.167 Attached to 0000:00:11.0 00:05:34.167 Cleaning up... 00:05:34.167 00:05:34.167 real 0m0.304s 00:05:34.167 user 0m0.116s 00:05:34.167 sys 0m0.087s 00:05:34.167 11:08:40 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.167 11:08:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.167 ************************************ 00:05:34.167 END TEST env_dpdk_post_init 00:05:34.167 ************************************ 00:05:34.167 11:08:40 env -- env/env.sh@26 -- # uname 00:05:34.167 11:08:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:34.167 11:08:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.167 11:08:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.167 11:08:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.167 11:08:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.167 ************************************ 00:05:34.167 START TEST env_mem_callbacks 00:05:34.167 ************************************ 00:05:34.167 11:08:40 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.167 EAL: Detected CPU lcores: 10 00:05:34.167 EAL: Detected NUMA nodes: 1 00:05:34.167 EAL: Detected shared linkage of DPDK 00:05:34.167 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.167 EAL: Selected IOVA mode 'PA' 00:05:34.426 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.426 00:05:34.426 00:05:34.426 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.426 http://cunit.sourceforge.net/ 00:05:34.426 00:05:34.426 00:05:34.426 Suite: memory 00:05:34.426 Test: test ... 
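Note: the register/unregister lines that follow are the mem_callbacks unit test exercising SPDK's memory notification path: each DPDK allocation that maps new hugepages is reported as 'register <vaddr> <len>' and the matching free as 'unregister <vaddr> <len>'. The binary can be re-run standalone with the same command line the run_test call above uses (a sketch, assuming hugepages are already configured on the VM):
  sudo /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
  # expect paired lines such as:
  #   register 0x200000400000 4194304 ... unregister 0x200000400000 4194304 PASSED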
00:05:34.426 register 0x200000200000 2097152 00:05:34.426 malloc 3145728 00:05:34.426 register 0x200000400000 4194304 00:05:34.426 buf 0x2000004fffc0 len 3145728 PASSED 00:05:34.426 malloc 64 00:05:34.426 buf 0x2000004ffec0 len 64 PASSED 00:05:34.426 malloc 4194304 00:05:34.426 register 0x200000800000 6291456 00:05:34.426 buf 0x2000009fffc0 len 4194304 PASSED 00:05:34.426 free 0x2000004fffc0 3145728 00:05:34.426 free 0x2000004ffec0 64 00:05:34.426 unregister 0x200000400000 4194304 PASSED 00:05:34.426 free 0x2000009fffc0 4194304 00:05:34.426 unregister 0x200000800000 6291456 PASSED 00:05:34.426 malloc 8388608 00:05:34.426 register 0x200000400000 10485760 00:05:34.426 buf 0x2000005fffc0 len 8388608 PASSED 00:05:34.426 free 0x2000005fffc0 8388608 00:05:34.426 unregister 0x200000400000 10485760 PASSED 00:05:34.426 passed 00:05:34.426 00:05:34.426 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.426 suites 1 1 n/a 0 0 00:05:34.426 tests 1 1 1 0 0 00:05:34.426 asserts 15 15 15 0 n/a 00:05:34.426 00:05:34.426 Elapsed time = 0.073 seconds 00:05:34.426 00:05:34.426 real 0m0.272s 00:05:34.426 user 0m0.114s 00:05:34.426 sys 0m0.057s 00:05:34.426 11:08:41 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.426 11:08:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:34.426 ************************************ 00:05:34.426 END TEST env_mem_callbacks 00:05:34.426 ************************************ 00:05:34.426 00:05:34.426 real 0m8.282s 00:05:34.426 user 0m6.718s 00:05:34.426 sys 0m1.188s 00:05:34.426 11:08:41 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.426 ************************************ 00:05:34.426 11:08:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.426 END TEST env 00:05:34.426 ************************************ 00:05:34.685 11:08:41 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:34.685 11:08:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.685 11:08:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.685 11:08:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.685 ************************************ 00:05:34.685 START TEST rpc 00:05:34.685 ************************************ 00:05:34.685 11:08:41 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:34.685 * Looking for test storage... 
00:05:34.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.685 11:08:41 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:34.686 11:08:41 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.686 11:08:41 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.686 11:08:41 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.686 11:08:41 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.686 11:08:41 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.686 11:08:41 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.686 11:08:41 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.686 11:08:41 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.686 11:08:41 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.686 11:08:41 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.686 11:08:41 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.686 11:08:41 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:34.686 11:08:41 rpc -- scripts/common.sh@345 -- # : 1 00:05:34.686 11:08:41 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.686 11:08:41 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.686 11:08:41 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:34.686 11:08:41 rpc -- scripts/common.sh@353 -- # local d=1 00:05:34.686 11:08:41 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.686 11:08:41 rpc -- scripts/common.sh@355 -- # echo 1 00:05:34.686 11:08:41 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.686 11:08:41 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:34.686 11:08:41 rpc -- scripts/common.sh@353 -- # local d=2 00:05:34.686 11:08:41 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.686 11:08:41 rpc -- scripts/common.sh@355 -- # echo 2 00:05:34.686 11:08:41 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.686 11:08:41 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.686 11:08:41 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.686 11:08:41 rpc -- scripts/common.sh@368 -- # return 0 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.686 --rc genhtml_branch_coverage=1 00:05:34.686 --rc genhtml_function_coverage=1 00:05:34.686 --rc genhtml_legend=1 00:05:34.686 --rc geninfo_all_blocks=1 00:05:34.686 --rc geninfo_unexecuted_blocks=1 00:05:34.686 00:05:34.686 ' 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.686 --rc genhtml_branch_coverage=1 00:05:34.686 --rc genhtml_function_coverage=1 00:05:34.686 --rc genhtml_legend=1 00:05:34.686 --rc geninfo_all_blocks=1 00:05:34.686 --rc geninfo_unexecuted_blocks=1 00:05:34.686 00:05:34.686 ' 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.686 --rc genhtml_branch_coverage=1 00:05:34.686 --rc genhtml_function_coverage=1 00:05:34.686 --rc 
genhtml_legend=1 00:05:34.686 --rc geninfo_all_blocks=1 00:05:34.686 --rc geninfo_unexecuted_blocks=1 00:05:34.686 00:05:34.686 ' 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.686 --rc genhtml_branch_coverage=1 00:05:34.686 --rc genhtml_function_coverage=1 00:05:34.686 --rc genhtml_legend=1 00:05:34.686 --rc geninfo_all_blocks=1 00:05:34.686 --rc geninfo_unexecuted_blocks=1 00:05:34.686 00:05:34.686 ' 00:05:34.686 11:08:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57483 00:05:34.686 11:08:41 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:34.686 11:08:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.686 11:08:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57483 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@835 -- # '[' -z 57483 ']' 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.686 11:08:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.945 [2024-12-10 11:08:41.595233] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:34.945 [2024-12-10 11:08:41.595447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57483 ] 00:05:35.205 [2024-12-10 11:08:41.783588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.205 [2024-12-10 11:08:41.908038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:35.205 [2024-12-10 11:08:41.908118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57483' to capture a snapshot of events at runtime. 00:05:35.205 [2024-12-10 11:08:41.908139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.205 [2024-12-10 11:08:41.908157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.205 [2024-12-10 11:08:41.908171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57483 for offline analysis/debug. 
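Note: the rpc_* tests below drive this spdk_tgt instance (pid 57483) over /var/tmp/spdk.sock; the rpc_cmd calls in the xtrace output are effectively scripts/rpc.py invocations against that socket. The rpc_integrity sequence can be reproduced by hand with the same RPC names and arguments that appear below (a sketch only; running rpc.py directly is an equivalent stand-in for rpc_cmd, whose extra harness plumbing is omitted here):
  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MiB malloc bdev with 512-byte blocks -> Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on top of it
  ./scripts/rpc.py bdev_get_bdevs                                # dumps the JSON descriptors shown below
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0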
00:05:35.205 [2024-12-10 11:08:41.909674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.463 [2024-12-10 11:08:42.131280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.094 11:08:42 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.094 11:08:42 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.094 11:08:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.094 11:08:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.094 11:08:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:36.094 11:08:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:36.094 11:08:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.094 11:08:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.094 11:08:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.094 ************************************ 00:05:36.094 START TEST rpc_integrity 00:05:36.094 ************************************ 00:05:36.094 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:36.094 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.094 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.094 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.094 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.094 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.094 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.094 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.094 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.095 { 00:05:36.095 "name": "Malloc0", 00:05:36.095 "aliases": [ 00:05:36.095 "75ebb363-f755-4350-8d68-677819b01c62" 00:05:36.095 ], 00:05:36.095 "product_name": "Malloc disk", 00:05:36.095 "block_size": 512, 00:05:36.095 "num_blocks": 16384, 00:05:36.095 "uuid": "75ebb363-f755-4350-8d68-677819b01c62", 00:05:36.095 "assigned_rate_limits": { 00:05:36.095 "rw_ios_per_sec": 0, 00:05:36.095 "rw_mbytes_per_sec": 0, 00:05:36.095 "r_mbytes_per_sec": 0, 00:05:36.095 "w_mbytes_per_sec": 0 00:05:36.095 }, 00:05:36.095 "claimed": false, 00:05:36.095 "zoned": false, 00:05:36.095 
"supported_io_types": { 00:05:36.095 "read": true, 00:05:36.095 "write": true, 00:05:36.095 "unmap": true, 00:05:36.095 "flush": true, 00:05:36.095 "reset": true, 00:05:36.095 "nvme_admin": false, 00:05:36.095 "nvme_io": false, 00:05:36.095 "nvme_io_md": false, 00:05:36.095 "write_zeroes": true, 00:05:36.095 "zcopy": true, 00:05:36.095 "get_zone_info": false, 00:05:36.095 "zone_management": false, 00:05:36.095 "zone_append": false, 00:05:36.095 "compare": false, 00:05:36.095 "compare_and_write": false, 00:05:36.095 "abort": true, 00:05:36.095 "seek_hole": false, 00:05:36.095 "seek_data": false, 00:05:36.095 "copy": true, 00:05:36.095 "nvme_iov_md": false 00:05:36.095 }, 00:05:36.095 "memory_domains": [ 00:05:36.095 { 00:05:36.095 "dma_device_id": "system", 00:05:36.095 "dma_device_type": 1 00:05:36.095 }, 00:05:36.095 { 00:05:36.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.095 "dma_device_type": 2 00:05:36.095 } 00:05:36.095 ], 00:05:36.095 "driver_specific": {} 00:05:36.095 } 00:05:36.095 ]' 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.095 [2024-12-10 11:08:42.808646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:36.095 [2024-12-10 11:08:42.808916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.095 [2024-12-10 11:08:42.808961] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:36.095 [2024-12-10 11:08:42.808978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.095 [2024-12-10 11:08:42.812089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:36.095 [2024-12-10 11:08:42.812257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.095 Passthru0 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:36.095 { 00:05:36.095 "name": "Malloc0", 00:05:36.095 "aliases": [ 00:05:36.095 "75ebb363-f755-4350-8d68-677819b01c62" 00:05:36.095 ], 00:05:36.095 "product_name": "Malloc disk", 00:05:36.095 "block_size": 512, 00:05:36.095 "num_blocks": 16384, 00:05:36.095 "uuid": "75ebb363-f755-4350-8d68-677819b01c62", 00:05:36.095 "assigned_rate_limits": { 00:05:36.095 "rw_ios_per_sec": 0, 00:05:36.095 "rw_mbytes_per_sec": 0, 00:05:36.095 "r_mbytes_per_sec": 0, 00:05:36.095 "w_mbytes_per_sec": 0 00:05:36.095 }, 00:05:36.095 "claimed": true, 00:05:36.095 "claim_type": "exclusive_write", 00:05:36.095 "zoned": false, 00:05:36.095 "supported_io_types": { 00:05:36.095 "read": true, 00:05:36.095 "write": true, 00:05:36.095 "unmap": true, 00:05:36.095 "flush": true, 00:05:36.095 "reset": true, 00:05:36.095 "nvme_admin": false, 
00:05:36.095 "nvme_io": false, 00:05:36.095 "nvme_io_md": false, 00:05:36.095 "write_zeroes": true, 00:05:36.095 "zcopy": true, 00:05:36.095 "get_zone_info": false, 00:05:36.095 "zone_management": false, 00:05:36.095 "zone_append": false, 00:05:36.095 "compare": false, 00:05:36.095 "compare_and_write": false, 00:05:36.095 "abort": true, 00:05:36.095 "seek_hole": false, 00:05:36.095 "seek_data": false, 00:05:36.095 "copy": true, 00:05:36.095 "nvme_iov_md": false 00:05:36.095 }, 00:05:36.095 "memory_domains": [ 00:05:36.095 { 00:05:36.095 "dma_device_id": "system", 00:05:36.095 "dma_device_type": 1 00:05:36.095 }, 00:05:36.095 { 00:05:36.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.095 "dma_device_type": 2 00:05:36.095 } 00:05:36.095 ], 00:05:36.095 "driver_specific": {} 00:05:36.095 }, 00:05:36.095 { 00:05:36.095 "name": "Passthru0", 00:05:36.095 "aliases": [ 00:05:36.095 "4acb7dee-46df-5b70-9375-9ca695f5e093" 00:05:36.095 ], 00:05:36.095 "product_name": "passthru", 00:05:36.095 "block_size": 512, 00:05:36.095 "num_blocks": 16384, 00:05:36.095 "uuid": "4acb7dee-46df-5b70-9375-9ca695f5e093", 00:05:36.095 "assigned_rate_limits": { 00:05:36.095 "rw_ios_per_sec": 0, 00:05:36.095 "rw_mbytes_per_sec": 0, 00:05:36.095 "r_mbytes_per_sec": 0, 00:05:36.095 "w_mbytes_per_sec": 0 00:05:36.095 }, 00:05:36.095 "claimed": false, 00:05:36.095 "zoned": false, 00:05:36.095 "supported_io_types": { 00:05:36.095 "read": true, 00:05:36.095 "write": true, 00:05:36.095 "unmap": true, 00:05:36.095 "flush": true, 00:05:36.095 "reset": true, 00:05:36.095 "nvme_admin": false, 00:05:36.095 "nvme_io": false, 00:05:36.095 "nvme_io_md": false, 00:05:36.095 "write_zeroes": true, 00:05:36.095 "zcopy": true, 00:05:36.095 "get_zone_info": false, 00:05:36.095 "zone_management": false, 00:05:36.095 "zone_append": false, 00:05:36.095 "compare": false, 00:05:36.095 "compare_and_write": false, 00:05:36.095 "abort": true, 00:05:36.095 "seek_hole": false, 00:05:36.095 "seek_data": false, 00:05:36.095 "copy": true, 00:05:36.095 "nvme_iov_md": false 00:05:36.095 }, 00:05:36.095 "memory_domains": [ 00:05:36.095 { 00:05:36.095 "dma_device_id": "system", 00:05:36.095 "dma_device_type": 1 00:05:36.095 }, 00:05:36.095 { 00:05:36.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.095 "dma_device_type": 2 00:05:36.095 } 00:05:36.095 ], 00:05:36.095 "driver_specific": { 00:05:36.095 "passthru": { 00:05:36.095 "name": "Passthru0", 00:05:36.095 "base_bdev_name": "Malloc0" 00:05:36.095 } 00:05:36.095 } 00:05:36.095 } 00:05:36.095 ]' 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.095 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.095 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.355 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:36.355 11:08:42 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.355 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.355 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:36.355 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:36.355 ************************************ 00:05:36.355 END TEST rpc_integrity 00:05:36.355 ************************************ 00:05:36.355 11:08:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.355 00:05:36.355 real 0m0.316s 00:05:36.355 user 0m0.194s 00:05:36.355 sys 0m0.029s 00:05:36.355 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.355 11:08:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 11:08:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:36.355 11:08:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.355 11:08:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.355 11:08:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 ************************************ 00:05:36.355 START TEST rpc_plugins 00:05:36.355 ************************************ 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:36.355 { 00:05:36.355 "name": "Malloc1", 00:05:36.355 "aliases": [ 00:05:36.355 "9cf5378b-cb60-45db-a472-eb8190956e69" 00:05:36.355 ], 00:05:36.355 "product_name": "Malloc disk", 00:05:36.355 "block_size": 4096, 00:05:36.355 "num_blocks": 256, 00:05:36.355 "uuid": "9cf5378b-cb60-45db-a472-eb8190956e69", 00:05:36.355 "assigned_rate_limits": { 00:05:36.355 "rw_ios_per_sec": 0, 00:05:36.355 "rw_mbytes_per_sec": 0, 00:05:36.355 "r_mbytes_per_sec": 0, 00:05:36.355 "w_mbytes_per_sec": 0 00:05:36.355 }, 00:05:36.355 "claimed": false, 00:05:36.355 "zoned": false, 00:05:36.355 "supported_io_types": { 00:05:36.355 "read": true, 00:05:36.355 "write": true, 00:05:36.355 "unmap": true, 00:05:36.355 "flush": true, 00:05:36.355 "reset": true, 00:05:36.355 "nvme_admin": false, 00:05:36.355 "nvme_io": false, 00:05:36.355 "nvme_io_md": false, 00:05:36.355 "write_zeroes": true, 00:05:36.355 "zcopy": true, 00:05:36.355 "get_zone_info": false, 00:05:36.355 "zone_management": false, 00:05:36.355 "zone_append": false, 00:05:36.355 "compare": false, 00:05:36.355 "compare_and_write": false, 00:05:36.355 "abort": true, 00:05:36.355 "seek_hole": false, 00:05:36.355 "seek_data": false, 00:05:36.355 "copy": true, 00:05:36.355 "nvme_iov_md": false 00:05:36.355 }, 00:05:36.355 "memory_domains": [ 00:05:36.355 { 
00:05:36.355 "dma_device_id": "system", 00:05:36.355 "dma_device_type": 1 00:05:36.355 }, 00:05:36.355 { 00:05:36.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.355 "dma_device_type": 2 00:05:36.355 } 00:05:36.355 ], 00:05:36.355 "driver_specific": {} 00:05:36.355 } 00:05:36.355 ]' 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:36.355 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:36.614 ************************************ 00:05:36.614 END TEST rpc_plugins 00:05:36.614 ************************************ 00:05:36.614 11:08:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:36.614 00:05:36.614 real 0m0.167s 00:05:36.614 user 0m0.111s 00:05:36.614 sys 0m0.015s 00:05:36.614 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.614 11:08:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.614 11:08:43 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:36.614 11:08:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.614 11:08:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.614 11:08:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.614 ************************************ 00:05:36.614 START TEST rpc_trace_cmd_test 00:05:36.614 ************************************ 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:36.614 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57483", 00:05:36.614 "tpoint_group_mask": "0x8", 00:05:36.614 "iscsi_conn": { 00:05:36.614 "mask": "0x2", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "scsi": { 00:05:36.614 "mask": "0x4", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "bdev": { 00:05:36.614 "mask": "0x8", 00:05:36.614 "tpoint_mask": "0xffffffffffffffff" 00:05:36.614 }, 00:05:36.614 "nvmf_rdma": { 00:05:36.614 "mask": "0x10", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "nvmf_tcp": { 00:05:36.614 "mask": "0x20", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "ftl": { 00:05:36.614 
"mask": "0x40", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "blobfs": { 00:05:36.614 "mask": "0x80", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "dsa": { 00:05:36.614 "mask": "0x200", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "thread": { 00:05:36.614 "mask": "0x400", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "nvme_pcie": { 00:05:36.614 "mask": "0x800", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "iaa": { 00:05:36.614 "mask": "0x1000", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "nvme_tcp": { 00:05:36.614 "mask": "0x2000", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "bdev_nvme": { 00:05:36.614 "mask": "0x4000", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "sock": { 00:05:36.614 "mask": "0x8000", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "blob": { 00:05:36.614 "mask": "0x10000", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "bdev_raid": { 00:05:36.614 "mask": "0x20000", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 }, 00:05:36.614 "scheduler": { 00:05:36.614 "mask": "0x40000", 00:05:36.614 "tpoint_mask": "0x0" 00:05:36.614 } 00:05:36.614 }' 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:36.614 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:36.873 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:36.873 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:36.873 ************************************ 00:05:36.873 END TEST rpc_trace_cmd_test 00:05:36.873 ************************************ 00:05:36.873 11:08:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:36.873 00:05:36.873 real 0m0.277s 00:05:36.873 user 0m0.250s 00:05:36.873 sys 0m0.017s 00:05:36.873 11:08:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.873 11:08:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:36.873 11:08:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:36.873 11:08:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:36.873 11:08:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:36.873 11:08:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.873 11:08:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.873 11:08:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.873 ************************************ 00:05:36.873 START TEST rpc_daemon_integrity 00:05:36.873 ************************************ 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.873 
11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.873 { 00:05:36.873 "name": "Malloc2", 00:05:36.873 "aliases": [ 00:05:36.873 "c2d3a1a3-872d-4ed4-8123-7f9bcff2f91b" 00:05:36.873 ], 00:05:36.873 "product_name": "Malloc disk", 00:05:36.873 "block_size": 512, 00:05:36.873 "num_blocks": 16384, 00:05:36.873 "uuid": "c2d3a1a3-872d-4ed4-8123-7f9bcff2f91b", 00:05:36.873 "assigned_rate_limits": { 00:05:36.873 "rw_ios_per_sec": 0, 00:05:36.873 "rw_mbytes_per_sec": 0, 00:05:36.873 "r_mbytes_per_sec": 0, 00:05:36.873 "w_mbytes_per_sec": 0 00:05:36.873 }, 00:05:36.873 "claimed": false, 00:05:36.873 "zoned": false, 00:05:36.873 "supported_io_types": { 00:05:36.873 "read": true, 00:05:36.873 "write": true, 00:05:36.873 "unmap": true, 00:05:36.873 "flush": true, 00:05:36.873 "reset": true, 00:05:36.873 "nvme_admin": false, 00:05:36.873 "nvme_io": false, 00:05:36.873 "nvme_io_md": false, 00:05:36.873 "write_zeroes": true, 00:05:36.873 "zcopy": true, 00:05:36.873 "get_zone_info": false, 00:05:36.873 "zone_management": false, 00:05:36.873 "zone_append": false, 00:05:36.873 "compare": false, 00:05:36.873 "compare_and_write": false, 00:05:36.873 "abort": true, 00:05:36.873 "seek_hole": false, 00:05:36.873 "seek_data": false, 00:05:36.873 "copy": true, 00:05:36.873 "nvme_iov_md": false 00:05:36.873 }, 00:05:36.873 "memory_domains": [ 00:05:36.873 { 00:05:36.873 "dma_device_id": "system", 00:05:36.873 "dma_device_type": 1 00:05:36.873 }, 00:05:36.873 { 00:05:36.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.873 "dma_device_type": 2 00:05:36.873 } 00:05:36.873 ], 00:05:36.873 "driver_specific": {} 00:05:36.873 } 00:05:36.873 ]' 00:05:36.873 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.132 [2024-12-10 11:08:43.734530] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:37.132 [2024-12-10 11:08:43.734593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:37.132 [2024-12-10 11:08:43.734627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:05:37.132 [2024-12-10 11:08:43.734642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.132 [2024-12-10 11:08:43.737483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.132 [2024-12-10 11:08:43.737526] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.132 Passthru0 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.132 { 00:05:37.132 "name": "Malloc2", 00:05:37.132 "aliases": [ 00:05:37.132 "c2d3a1a3-872d-4ed4-8123-7f9bcff2f91b" 00:05:37.132 ], 00:05:37.132 "product_name": "Malloc disk", 00:05:37.132 "block_size": 512, 00:05:37.132 "num_blocks": 16384, 00:05:37.132 "uuid": "c2d3a1a3-872d-4ed4-8123-7f9bcff2f91b", 00:05:37.132 "assigned_rate_limits": { 00:05:37.132 "rw_ios_per_sec": 0, 00:05:37.132 "rw_mbytes_per_sec": 0, 00:05:37.132 "r_mbytes_per_sec": 0, 00:05:37.132 "w_mbytes_per_sec": 0 00:05:37.132 }, 00:05:37.132 "claimed": true, 00:05:37.132 "claim_type": "exclusive_write", 00:05:37.132 "zoned": false, 00:05:37.132 "supported_io_types": { 00:05:37.132 "read": true, 00:05:37.132 "write": true, 00:05:37.132 "unmap": true, 00:05:37.132 "flush": true, 00:05:37.132 "reset": true, 00:05:37.132 "nvme_admin": false, 00:05:37.132 "nvme_io": false, 00:05:37.132 "nvme_io_md": false, 00:05:37.132 "write_zeroes": true, 00:05:37.132 "zcopy": true, 00:05:37.132 "get_zone_info": false, 00:05:37.132 "zone_management": false, 00:05:37.132 "zone_append": false, 00:05:37.132 "compare": false, 00:05:37.132 "compare_and_write": false, 00:05:37.132 "abort": true, 00:05:37.132 "seek_hole": false, 00:05:37.132 "seek_data": false, 00:05:37.132 "copy": true, 00:05:37.132 "nvme_iov_md": false 00:05:37.132 }, 00:05:37.132 "memory_domains": [ 00:05:37.132 { 00:05:37.132 "dma_device_id": "system", 00:05:37.132 "dma_device_type": 1 00:05:37.132 }, 00:05:37.132 { 00:05:37.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.132 "dma_device_type": 2 00:05:37.132 } 00:05:37.132 ], 00:05:37.132 "driver_specific": {} 00:05:37.132 }, 00:05:37.132 { 00:05:37.132 "name": "Passthru0", 00:05:37.132 "aliases": [ 00:05:37.132 "4542840f-d89b-50bc-b3f0-c3368ebff235" 00:05:37.132 ], 00:05:37.132 "product_name": "passthru", 00:05:37.132 "block_size": 512, 00:05:37.132 "num_blocks": 16384, 00:05:37.132 "uuid": "4542840f-d89b-50bc-b3f0-c3368ebff235", 00:05:37.132 "assigned_rate_limits": { 00:05:37.132 "rw_ios_per_sec": 0, 00:05:37.132 "rw_mbytes_per_sec": 0, 00:05:37.132 "r_mbytes_per_sec": 0, 00:05:37.132 "w_mbytes_per_sec": 0 00:05:37.132 }, 00:05:37.132 "claimed": false, 00:05:37.132 "zoned": false, 00:05:37.132 "supported_io_types": { 00:05:37.132 "read": true, 00:05:37.132 "write": true, 00:05:37.132 "unmap": true, 00:05:37.132 "flush": true, 00:05:37.132 "reset": true, 00:05:37.132 "nvme_admin": false, 00:05:37.132 "nvme_io": false, 00:05:37.132 
"nvme_io_md": false, 00:05:37.132 "write_zeroes": true, 00:05:37.132 "zcopy": true, 00:05:37.132 "get_zone_info": false, 00:05:37.132 "zone_management": false, 00:05:37.132 "zone_append": false, 00:05:37.132 "compare": false, 00:05:37.132 "compare_and_write": false, 00:05:37.132 "abort": true, 00:05:37.132 "seek_hole": false, 00:05:37.132 "seek_data": false, 00:05:37.132 "copy": true, 00:05:37.132 "nvme_iov_md": false 00:05:37.132 }, 00:05:37.132 "memory_domains": [ 00:05:37.132 { 00:05:37.132 "dma_device_id": "system", 00:05:37.132 "dma_device_type": 1 00:05:37.132 }, 00:05:37.132 { 00:05:37.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.132 "dma_device_type": 2 00:05:37.132 } 00:05:37.132 ], 00:05:37.132 "driver_specific": { 00:05:37.132 "passthru": { 00:05:37.132 "name": "Passthru0", 00:05:37.132 "base_bdev_name": "Malloc2" 00:05:37.132 } 00:05:37.132 } 00:05:37.132 } 00:05:37.132 ]' 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.132 ************************************ 00:05:37.132 END TEST rpc_daemon_integrity 00:05:37.132 ************************************ 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.132 00:05:37.132 real 0m0.348s 00:05:37.132 user 0m0.218s 00:05:37.132 sys 0m0.034s 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.132 11:08:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.391 11:08:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:37.391 11:08:43 rpc -- rpc/rpc.sh@84 -- # killprocess 57483 00:05:37.391 11:08:43 rpc -- common/autotest_common.sh@954 -- # '[' -z 57483 ']' 00:05:37.391 11:08:43 rpc -- common/autotest_common.sh@958 -- # kill -0 57483 00:05:37.391 11:08:43 rpc -- common/autotest_common.sh@959 -- # uname 00:05:37.391 11:08:43 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.391 11:08:43 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57483 00:05:37.391 killing process with pid 57483 00:05:37.391 11:08:43 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.391 11:08:43 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.391 11:08:43 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57483' 00:05:37.391 11:08:43 rpc -- common/autotest_common.sh@973 -- # kill 57483 00:05:37.391 11:08:43 rpc -- common/autotest_common.sh@978 -- # wait 57483 00:05:39.295 00:05:39.295 real 0m4.636s 00:05:39.295 user 0m5.410s 00:05:39.295 sys 0m0.776s 00:05:39.295 11:08:45 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.295 11:08:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.295 ************************************ 00:05:39.295 END TEST rpc 00:05:39.295 ************************************ 00:05:39.295 11:08:45 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:39.295 11:08:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.295 11:08:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.295 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:05:39.295 ************************************ 00:05:39.295 START TEST skip_rpc 00:05:39.295 ************************************ 00:05:39.295 11:08:45 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:39.295 * Looking for test storage... 00:05:39.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.295 11:08:46 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.295 11:08:46 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.295 11:08:46 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.554 11:08:46 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.554 11:08:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:39.554 11:08:46 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.554 11:08:46 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.554 --rc genhtml_branch_coverage=1 00:05:39.554 --rc genhtml_function_coverage=1 00:05:39.554 --rc genhtml_legend=1 00:05:39.554 --rc geninfo_all_blocks=1 00:05:39.554 --rc geninfo_unexecuted_blocks=1 00:05:39.554 00:05:39.554 ' 00:05:39.554 11:08:46 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.554 --rc genhtml_branch_coverage=1 00:05:39.554 --rc genhtml_function_coverage=1 00:05:39.554 --rc genhtml_legend=1 00:05:39.554 --rc geninfo_all_blocks=1 00:05:39.554 --rc geninfo_unexecuted_blocks=1 00:05:39.554 00:05:39.554 ' 00:05:39.554 11:08:46 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.554 --rc genhtml_branch_coverage=1 00:05:39.554 --rc genhtml_function_coverage=1 00:05:39.554 --rc genhtml_legend=1 00:05:39.554 --rc geninfo_all_blocks=1 00:05:39.554 --rc geninfo_unexecuted_blocks=1 00:05:39.554 00:05:39.554 ' 00:05:39.554 11:08:46 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.554 --rc genhtml_branch_coverage=1 00:05:39.554 --rc genhtml_function_coverage=1 00:05:39.554 --rc genhtml_legend=1 00:05:39.554 --rc geninfo_all_blocks=1 00:05:39.554 --rc geninfo_unexecuted_blocks=1 00:05:39.554 00:05:39.554 ' 00:05:39.554 11:08:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.554 11:08:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.554 11:08:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:39.554 11:08:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.554 11:08:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.554 11:08:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.554 ************************************ 00:05:39.554 START TEST skip_rpc 00:05:39.554 ************************************ 00:05:39.554 11:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:39.554 11:08:46 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57706 00:05:39.554 11:08:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.554 11:08:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:39.554 11:08:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:39.554 [2024-12-10 11:08:46.274925] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:39.554 [2024-12-10 11:08:46.275272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57706 ] 00:05:39.813 [2024-12-10 11:08:46.444861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.813 [2024-12-10 11:08:46.544089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.072 [2024-12-10 11:08:46.748797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57706 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57706 ']' 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57706 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57706 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 57706' 00:05:45.343 killing process with pid 57706 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57706 00:05:45.343 11:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57706 00:05:46.722 ************************************ 00:05:46.722 END TEST skip_rpc 00:05:46.722 ************************************ 00:05:46.722 00:05:46.722 real 0m6.974s 00:05:46.722 user 0m6.519s 00:05:46.722 sys 0m0.347s 00:05:46.722 11:08:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.722 11:08:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.722 11:08:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:46.722 11:08:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.722 11:08:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.722 11:08:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.722 ************************************ 00:05:46.722 START TEST skip_rpc_with_json 00:05:46.722 ************************************ 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57810 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57810 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57810 ']' 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.722 11:08:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.722 [2024-12-10 11:08:53.322039] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:05:46.722 [2024-12-10 11:08:53.322232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57810 ] 00:05:46.722 [2024-12-10 11:08:53.500120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.981 [2024-12-10 11:08:53.599183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.981 [2024-12-10 11:08:53.802102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.548 [2024-12-10 11:08:54.318445] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:47.548 request: 00:05:47.548 { 00:05:47.548 "trtype": "tcp", 00:05:47.548 "method": "nvmf_get_transports", 00:05:47.548 "req_id": 1 00:05:47.548 } 00:05:47.548 Got JSON-RPC error response 00:05:47.548 response: 00:05:47.548 { 00:05:47.548 "code": -19, 00:05:47.548 "message": "No such device" 00:05:47.548 } 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.548 [2024-12-10 11:08:54.330585] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.548 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.808 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.808 11:08:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.808 { 00:05:47.808 "subsystems": [ 00:05:47.808 { 00:05:47.808 "subsystem": "fsdev", 00:05:47.808 "config": [ 00:05:47.808 { 00:05:47.808 "method": "fsdev_set_opts", 00:05:47.808 "params": { 00:05:47.808 "fsdev_io_pool_size": 65535, 00:05:47.808 "fsdev_io_cache_size": 256 00:05:47.808 } 00:05:47.808 } 00:05:47.808 ] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "vfio_user_target", 00:05:47.808 "config": null 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "keyring", 00:05:47.808 "config": [] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "iobuf", 00:05:47.808 "config": [ 00:05:47.808 { 00:05:47.808 "method": "iobuf_set_options", 00:05:47.808 "params": { 00:05:47.808 "small_pool_count": 8192, 00:05:47.808 "large_pool_count": 1024, 00:05:47.808 
"small_bufsize": 8192, 00:05:47.808 "large_bufsize": 135168, 00:05:47.808 "enable_numa": false 00:05:47.808 } 00:05:47.808 } 00:05:47.808 ] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "sock", 00:05:47.808 "config": [ 00:05:47.808 { 00:05:47.808 "method": "sock_set_default_impl", 00:05:47.808 "params": { 00:05:47.808 "impl_name": "uring" 00:05:47.808 } 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "method": "sock_impl_set_options", 00:05:47.808 "params": { 00:05:47.808 "impl_name": "ssl", 00:05:47.808 "recv_buf_size": 4096, 00:05:47.808 "send_buf_size": 4096, 00:05:47.808 "enable_recv_pipe": true, 00:05:47.808 "enable_quickack": false, 00:05:47.808 "enable_placement_id": 0, 00:05:47.808 "enable_zerocopy_send_server": true, 00:05:47.808 "enable_zerocopy_send_client": false, 00:05:47.808 "zerocopy_threshold": 0, 00:05:47.808 "tls_version": 0, 00:05:47.808 "enable_ktls": false 00:05:47.808 } 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "method": "sock_impl_set_options", 00:05:47.808 "params": { 00:05:47.808 "impl_name": "posix", 00:05:47.808 "recv_buf_size": 2097152, 00:05:47.808 "send_buf_size": 2097152, 00:05:47.808 "enable_recv_pipe": true, 00:05:47.808 "enable_quickack": false, 00:05:47.808 "enable_placement_id": 0, 00:05:47.808 "enable_zerocopy_send_server": true, 00:05:47.808 "enable_zerocopy_send_client": false, 00:05:47.808 "zerocopy_threshold": 0, 00:05:47.808 "tls_version": 0, 00:05:47.808 "enable_ktls": false 00:05:47.808 } 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "method": "sock_impl_set_options", 00:05:47.808 "params": { 00:05:47.808 "impl_name": "uring", 00:05:47.808 "recv_buf_size": 2097152, 00:05:47.808 "send_buf_size": 2097152, 00:05:47.808 "enable_recv_pipe": true, 00:05:47.808 "enable_quickack": false, 00:05:47.808 "enable_placement_id": 0, 00:05:47.808 "enable_zerocopy_send_server": false, 00:05:47.808 "enable_zerocopy_send_client": false, 00:05:47.808 "zerocopy_threshold": 0, 00:05:47.808 "tls_version": 0, 00:05:47.808 "enable_ktls": false 00:05:47.808 } 00:05:47.808 } 00:05:47.808 ] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "vmd", 00:05:47.808 "config": [] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "accel", 00:05:47.808 "config": [ 00:05:47.808 { 00:05:47.808 "method": "accel_set_options", 00:05:47.808 "params": { 00:05:47.808 "small_cache_size": 128, 00:05:47.808 "large_cache_size": 16, 00:05:47.808 "task_count": 2048, 00:05:47.808 "sequence_count": 2048, 00:05:47.808 "buf_count": 2048 00:05:47.808 } 00:05:47.808 } 00:05:47.808 ] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "bdev", 00:05:47.808 "config": [ 00:05:47.808 { 00:05:47.808 "method": "bdev_set_options", 00:05:47.808 "params": { 00:05:47.808 "bdev_io_pool_size": 65535, 00:05:47.808 "bdev_io_cache_size": 256, 00:05:47.808 "bdev_auto_examine": true, 00:05:47.808 "iobuf_small_cache_size": 128, 00:05:47.808 "iobuf_large_cache_size": 16 00:05:47.808 } 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "method": "bdev_raid_set_options", 00:05:47.808 "params": { 00:05:47.808 "process_window_size_kb": 1024, 00:05:47.808 "process_max_bandwidth_mb_sec": 0 00:05:47.808 } 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "method": "bdev_iscsi_set_options", 00:05:47.808 "params": { 00:05:47.808 "timeout_sec": 30 00:05:47.808 } 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "method": "bdev_nvme_set_options", 00:05:47.808 "params": { 00:05:47.808 "action_on_timeout": "none", 00:05:47.808 "timeout_us": 0, 00:05:47.808 "timeout_admin_us": 0, 00:05:47.808 "keep_alive_timeout_ms": 10000, 
00:05:47.808 "arbitration_burst": 0, 00:05:47.808 "low_priority_weight": 0, 00:05:47.808 "medium_priority_weight": 0, 00:05:47.808 "high_priority_weight": 0, 00:05:47.808 "nvme_adminq_poll_period_us": 10000, 00:05:47.808 "nvme_ioq_poll_period_us": 0, 00:05:47.808 "io_queue_requests": 0, 00:05:47.808 "delay_cmd_submit": true, 00:05:47.808 "transport_retry_count": 4, 00:05:47.808 "bdev_retry_count": 3, 00:05:47.808 "transport_ack_timeout": 0, 00:05:47.808 "ctrlr_loss_timeout_sec": 0, 00:05:47.808 "reconnect_delay_sec": 0, 00:05:47.808 "fast_io_fail_timeout_sec": 0, 00:05:47.808 "disable_auto_failback": false, 00:05:47.808 "generate_uuids": false, 00:05:47.808 "transport_tos": 0, 00:05:47.808 "nvme_error_stat": false, 00:05:47.808 "rdma_srq_size": 0, 00:05:47.808 "io_path_stat": false, 00:05:47.808 "allow_accel_sequence": false, 00:05:47.808 "rdma_max_cq_size": 0, 00:05:47.808 "rdma_cm_event_timeout_ms": 0, 00:05:47.808 "dhchap_digests": [ 00:05:47.808 "sha256", 00:05:47.808 "sha384", 00:05:47.808 "sha512" 00:05:47.808 ], 00:05:47.808 "dhchap_dhgroups": [ 00:05:47.808 "null", 00:05:47.808 "ffdhe2048", 00:05:47.808 "ffdhe3072", 00:05:47.808 "ffdhe4096", 00:05:47.808 "ffdhe6144", 00:05:47.808 "ffdhe8192" 00:05:47.808 ] 00:05:47.808 } 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "method": "bdev_nvme_set_hotplug", 00:05:47.808 "params": { 00:05:47.808 "period_us": 100000, 00:05:47.808 "enable": false 00:05:47.808 } 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "method": "bdev_wait_for_examine" 00:05:47.808 } 00:05:47.808 ] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "scsi", 00:05:47.808 "config": null 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "scheduler", 00:05:47.808 "config": [ 00:05:47.808 { 00:05:47.808 "method": "framework_set_scheduler", 00:05:47.808 "params": { 00:05:47.808 "name": "static" 00:05:47.808 } 00:05:47.808 } 00:05:47.808 ] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "vhost_scsi", 00:05:47.808 "config": [] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "vhost_blk", 00:05:47.808 "config": [] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "ublk", 00:05:47.808 "config": [] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "nbd", 00:05:47.808 "config": [] 00:05:47.808 }, 00:05:47.808 { 00:05:47.808 "subsystem": "nvmf", 00:05:47.808 "config": [ 00:05:47.808 { 00:05:47.808 "method": "nvmf_set_config", 00:05:47.808 "params": { 00:05:47.808 "discovery_filter": "match_any", 00:05:47.808 "admin_cmd_passthru": { 00:05:47.808 "identify_ctrlr": false 00:05:47.808 }, 00:05:47.808 "dhchap_digests": [ 00:05:47.808 "sha256", 00:05:47.808 "sha384", 00:05:47.808 "sha512" 00:05:47.809 ], 00:05:47.809 "dhchap_dhgroups": [ 00:05:47.809 "null", 00:05:47.809 "ffdhe2048", 00:05:47.809 "ffdhe3072", 00:05:47.809 "ffdhe4096", 00:05:47.809 "ffdhe6144", 00:05:47.809 "ffdhe8192" 00:05:47.809 ] 00:05:47.809 } 00:05:47.809 }, 00:05:47.809 { 00:05:47.809 "method": "nvmf_set_max_subsystems", 00:05:47.809 "params": { 00:05:47.809 "max_subsystems": 1024 00:05:47.809 } 00:05:47.809 }, 00:05:47.809 { 00:05:47.809 "method": "nvmf_set_crdt", 00:05:47.809 "params": { 00:05:47.809 "crdt1": 0, 00:05:47.809 "crdt2": 0, 00:05:47.809 "crdt3": 0 00:05:47.809 } 00:05:47.809 }, 00:05:47.809 { 00:05:47.809 "method": "nvmf_create_transport", 00:05:47.809 "params": { 00:05:47.809 "trtype": "TCP", 00:05:47.809 "max_queue_depth": 128, 00:05:47.809 "max_io_qpairs_per_ctrlr": 127, 00:05:47.809 "in_capsule_data_size": 4096, 00:05:47.809 "max_io_size": 131072, 00:05:47.809 
"io_unit_size": 131072, 00:05:47.809 "max_aq_depth": 128, 00:05:47.809 "num_shared_buffers": 511, 00:05:47.809 "buf_cache_size": 4294967295, 00:05:47.809 "dif_insert_or_strip": false, 00:05:47.809 "zcopy": false, 00:05:47.809 "c2h_success": true, 00:05:47.809 "sock_priority": 0, 00:05:47.809 "abort_timeout_sec": 1, 00:05:47.809 "ack_timeout": 0, 00:05:47.809 "data_wr_pool_size": 0 00:05:47.809 } 00:05:47.809 } 00:05:47.809 ] 00:05:47.809 }, 00:05:47.809 { 00:05:47.809 "subsystem": "iscsi", 00:05:47.809 "config": [ 00:05:47.809 { 00:05:47.809 "method": "iscsi_set_options", 00:05:47.809 "params": { 00:05:47.809 "node_base": "iqn.2016-06.io.spdk", 00:05:47.809 "max_sessions": 128, 00:05:47.809 "max_connections_per_session": 2, 00:05:47.809 "max_queue_depth": 64, 00:05:47.809 "default_time2wait": 2, 00:05:47.809 "default_time2retain": 20, 00:05:47.809 "first_burst_length": 8192, 00:05:47.809 "immediate_data": true, 00:05:47.809 "allow_duplicated_isid": false, 00:05:47.809 "error_recovery_level": 0, 00:05:47.809 "nop_timeout": 60, 00:05:47.809 "nop_in_interval": 30, 00:05:47.809 "disable_chap": false, 00:05:47.809 "require_chap": false, 00:05:47.809 "mutual_chap": false, 00:05:47.809 "chap_group": 0, 00:05:47.809 "max_large_datain_per_connection": 64, 00:05:47.809 "max_r2t_per_connection": 4, 00:05:47.809 "pdu_pool_size": 36864, 00:05:47.809 "immediate_data_pool_size": 16384, 00:05:47.809 "data_out_pool_size": 2048 00:05:47.809 } 00:05:47.809 } 00:05:47.809 ] 00:05:47.809 } 00:05:47.809 ] 00:05:47.809 } 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57810 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57810 ']' 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57810 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57810 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.809 killing process with pid 57810 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57810' 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57810 00:05:47.809 11:08:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57810 00:05:49.712 11:08:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57855 00:05:49.712 11:08:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.712 11:08:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57855 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57855 ']' 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57855 00:05:54.980 
11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57855 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.980 killing process with pid 57855 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57855' 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57855 00:05:54.980 11:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57855 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:56.884 00:05:56.884 real 0m10.305s 00:05:56.884 user 0m9.930s 00:05:56.884 sys 0m0.771s 00:05:56.884 ************************************ 00:05:56.884 END TEST skip_rpc_with_json 00:05:56.884 ************************************ 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.884 11:09:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:56.884 11:09:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.884 11:09:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.884 11:09:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.884 ************************************ 00:05:56.884 START TEST skip_rpc_with_delay 00:05:56.884 ************************************ 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:56.884 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.884 [2024-12-10 11:09:03.678411] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:57.143 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:57.143 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.143 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.143 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.143 00:05:57.143 real 0m0.198s 00:05:57.143 user 0m0.100s 00:05:57.143 sys 0m0.096s 00:05:57.143 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.143 ************************************ 00:05:57.143 END TEST skip_rpc_with_delay 00:05:57.143 ************************************ 00:05:57.143 11:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:57.143 11:09:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:57.143 11:09:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:57.143 11:09:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:57.143 11:09:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.143 11:09:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.143 11:09:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.143 ************************************ 00:05:57.143 START TEST exit_on_failed_rpc_init 00:05:57.143 ************************************ 00:05:57.143 11:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:57.143 11:09:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57983 00:05:57.143 11:09:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.143 11:09:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57983 00:05:57.143 11:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57983 ']' 00:05:57.143 11:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.144 11:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.144 11:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.144 11:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.144 11:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.144 [2024-12-10 11:09:03.936214] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:05:57.144 [2024-12-10 11:09:03.936432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57983 ] 00:05:57.402 [2024-12-10 11:09:04.114345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.402 [2024-12-10 11:09:04.205847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.661 [2024-12-10 11:09:04.419398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:58.228 11:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.486 [2024-12-10 11:09:05.079962] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:05:58.486 [2024-12-10 11:09:05.080154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58007 ] 00:05:58.486 [2024-12-10 11:09:05.267892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.744 [2024-12-10 11:09:05.394250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.744 [2024-12-10 11:09:05.394398] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:58.744 [2024-12-10 11:09:05.394425] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:58.744 [2024-12-10 11:09:05.394446] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.002 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:59.002 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.002 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:59.002 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:59.002 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:59.002 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.002 11:09:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:59.002 11:09:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57983 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57983 ']' 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57983 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57983 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.003 killing process with pid 57983 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57983' 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57983 00:05:59.003 11:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57983 00:06:00.905 00:06:00.905 real 0m3.799s 00:06:00.905 user 0m4.304s 00:06:00.905 sys 0m0.537s 00:06:00.905 ************************************ 00:06:00.905 END TEST exit_on_failed_rpc_init 00:06:00.905 ************************************ 00:06:00.905 11:09:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.905 11:09:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.905 11:09:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:00.905 ************************************ 00:06:00.905 END TEST skip_rpc 00:06:00.905 ************************************ 00:06:00.905 00:06:00.905 real 0m21.681s 00:06:00.905 user 0m21.036s 00:06:00.905 sys 0m1.958s 00:06:00.906 11:09:07 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.906 11:09:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.906 11:09:07 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:00.906 11:09:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.906 11:09:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.906 11:09:07 -- common/autotest_common.sh@10 -- # set +x 00:06:00.906 
************************************ 00:06:00.906 START TEST rpc_client 00:06:00.906 ************************************ 00:06:00.906 11:09:07 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:01.164 * Looking for test storage... 00:06:01.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:01.164 11:09:07 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.164 11:09:07 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.164 11:09:07 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.164 11:09:07 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:01.164 11:09:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.165 11:09:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:01.165 11:09:07 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.165 11:09:07 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:01.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.165 --rc genhtml_branch_coverage=1 00:06:01.165 --rc genhtml_function_coverage=1 00:06:01.165 --rc genhtml_legend=1 00:06:01.165 --rc geninfo_all_blocks=1 00:06:01.165 --rc geninfo_unexecuted_blocks=1 00:06:01.165 00:06:01.165 ' 00:06:01.165 11:09:07 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.165 --rc genhtml_branch_coverage=1 00:06:01.165 --rc genhtml_function_coverage=1 00:06:01.165 --rc genhtml_legend=1 00:06:01.165 --rc geninfo_all_blocks=1 00:06:01.165 --rc geninfo_unexecuted_blocks=1 00:06:01.165 00:06:01.165 ' 00:06:01.165 11:09:07 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.165 --rc genhtml_branch_coverage=1 00:06:01.165 --rc genhtml_function_coverage=1 00:06:01.165 --rc genhtml_legend=1 00:06:01.165 --rc geninfo_all_blocks=1 00:06:01.165 --rc geninfo_unexecuted_blocks=1 00:06:01.165 00:06:01.165 ' 00:06:01.165 11:09:07 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.165 --rc genhtml_branch_coverage=1 00:06:01.165 --rc genhtml_function_coverage=1 00:06:01.165 --rc genhtml_legend=1 00:06:01.165 --rc geninfo_all_blocks=1 00:06:01.165 --rc geninfo_unexecuted_blocks=1 00:06:01.165 00:06:01.165 ' 00:06:01.165 11:09:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:01.165 OK 00:06:01.165 11:09:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:01.165 00:06:01.165 real 0m0.237s 00:06:01.165 user 0m0.143s 00:06:01.165 sys 0m0.104s 00:06:01.165 11:09:07 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.165 ************************************ 00:06:01.165 END TEST rpc_client 00:06:01.165 ************************************ 00:06:01.165 11:09:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:01.165 11:09:07 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:01.165 11:09:07 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.165 11:09:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.165 11:09:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.165 ************************************ 00:06:01.165 START TEST json_config 00:06:01.165 ************************************ 00:06:01.165 11:09:07 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:01.423 11:09:08 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.423 11:09:08 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.423 11:09:08 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.423 11:09:08 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.423 11:09:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.423 11:09:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.423 11:09:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.423 11:09:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.423 11:09:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.423 11:09:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.423 11:09:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.423 11:09:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.423 11:09:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.423 11:09:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.423 11:09:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.423 11:09:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:01.423 11:09:08 json_config -- scripts/common.sh@345 -- # : 1 00:06:01.423 11:09:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.423 11:09:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.423 11:09:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:01.423 11:09:08 json_config -- scripts/common.sh@353 -- # local d=1 00:06:01.423 11:09:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.423 11:09:08 json_config -- scripts/common.sh@355 -- # echo 1 00:06:01.423 11:09:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.423 11:09:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:01.423 11:09:08 json_config -- scripts/common.sh@353 -- # local d=2 00:06:01.423 11:09:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.423 11:09:08 json_config -- scripts/common.sh@355 -- # echo 2 00:06:01.423 11:09:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.424 11:09:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.424 11:09:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.424 11:09:08 json_config -- scripts/common.sh@368 -- # return 0 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.424 --rc genhtml_branch_coverage=1 00:06:01.424 --rc genhtml_function_coverage=1 00:06:01.424 --rc genhtml_legend=1 00:06:01.424 --rc geninfo_all_blocks=1 00:06:01.424 --rc geninfo_unexecuted_blocks=1 00:06:01.424 00:06:01.424 ' 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.424 --rc genhtml_branch_coverage=1 00:06:01.424 --rc genhtml_function_coverage=1 00:06:01.424 --rc genhtml_legend=1 00:06:01.424 --rc geninfo_all_blocks=1 00:06:01.424 --rc geninfo_unexecuted_blocks=1 00:06:01.424 00:06:01.424 ' 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.424 --rc genhtml_branch_coverage=1 00:06:01.424 --rc genhtml_function_coverage=1 00:06:01.424 --rc genhtml_legend=1 00:06:01.424 --rc geninfo_all_blocks=1 00:06:01.424 --rc geninfo_unexecuted_blocks=1 00:06:01.424 00:06:01.424 ' 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.424 --rc genhtml_branch_coverage=1 00:06:01.424 --rc genhtml_function_coverage=1 00:06:01.424 --rc genhtml_legend=1 00:06:01.424 --rc geninfo_all_blocks=1 00:06:01.424 --rc geninfo_unexecuted_blocks=1 00:06:01.424 00:06:01.424 ' 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.424 11:09:08 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.424 11:09:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:01.424 11:09:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.424 11:09:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.424 11:09:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.424 11:09:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.424 11:09:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.424 11:09:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.424 11:09:08 json_config -- paths/export.sh@5 -- # export PATH 00:06:01.424 11:09:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@51 -- # : 0 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:01.424 11:09:08 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:01.424 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:01.424 11:09:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.424 INFO: JSON configuration test init 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.424 11:09:08 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:01.424 11:09:08 json_config -- json_config/common.sh@9 -- # local app=target 00:06:01.424 11:09:08 json_config -- json_config/common.sh@10 -- # shift 
00:06:01.424 11:09:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.424 11:09:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.424 11:09:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.424 11:09:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.424 11:09:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.424 11:09:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58166 00:06:01.424 Waiting for target to run... 00:06:01.424 11:09:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.424 11:09:08 json_config -- json_config/common.sh@25 -- # waitforlisten 58166 /var/tmp/spdk_tgt.sock 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@835 -- # '[' -z 58166 ']' 00:06:01.424 11:09:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.424 11:09:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.683 [2024-12-10 11:09:08.305169] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:01.683 [2024-12-10 11:09:08.305374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58166 ] 00:06:01.942 [2024-12-10 11:09:08.657678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.942 [2024-12-10 11:09:08.739736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.509 11:09:09 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.509 00:06:02.509 11:09:09 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:02.509 11:09:09 json_config -- json_config/common.sh@26 -- # echo '' 00:06:02.509 11:09:09 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:02.509 11:09:09 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:02.509 11:09:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.509 11:09:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.509 11:09:09 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:02.509 11:09:09 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:02.509 11:09:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.509 11:09:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.509 11:09:09 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:02.509 11:09:09 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:02.509 11:09:09 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:03.076 [2024-12-10 11:09:09.747118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.643 11:09:10 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:03.643 11:09:10 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:03.643 11:09:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.643 11:09:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.643 11:09:10 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:03.643 11:09:10 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:03.643 11:09:10 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:03.644 11:09:10 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:03.644 11:09:10 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:03.644 11:09:10 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:03.644 11:09:10 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:03.644 11:09:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@54 -- # sort 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:03.903 11:09:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.903 11:09:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:03.903 11:09:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.903 11:09:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.903 11:09:10 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:03.903 11:09:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.903 11:09:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:04.470 MallocForNvmf0 00:06:04.470 11:09:11 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:04.470 11:09:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:04.470 MallocForNvmf1 00:06:04.470 11:09:11 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:04.470 11:09:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:04.728 [2024-12-10 11:09:11.469275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.728 11:09:11 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.728 11:09:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.987 11:09:11 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.987 11:09:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.246 11:09:11 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:05.246 11:09:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:05.504 11:09:12 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:05.504 11:09:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:05.763 [2024-12-10 11:09:12.486247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:05.763 11:09:12 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:05.763 11:09:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.763 11:09:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.763 11:09:12 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:05.763 11:09:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.763 11:09:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.763 11:09:12 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:06:05.763 11:09:12 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.763 11:09:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:06.330 MallocBdevForConfigChangeCheck 00:06:06.330 11:09:12 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:06.330 11:09:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.330 11:09:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.330 11:09:12 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:06.330 11:09:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.588 INFO: shutting down applications... 00:06:06.588 11:09:13 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:06.588 11:09:13 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:06.588 11:09:13 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:06.588 11:09:13 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:06.588 11:09:13 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:06.847 Calling clear_iscsi_subsystem 00:06:06.847 Calling clear_nvmf_subsystem 00:06:06.847 Calling clear_nbd_subsystem 00:06:06.847 Calling clear_ublk_subsystem 00:06:06.847 Calling clear_vhost_blk_subsystem 00:06:06.847 Calling clear_vhost_scsi_subsystem 00:06:06.847 Calling clear_bdev_subsystem 00:06:07.106 11:09:13 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:07.106 11:09:13 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:07.106 11:09:13 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:07.106 11:09:13 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.106 11:09:13 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.106 11:09:13 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:07.365 11:09:14 json_config -- json_config/json_config.sh@352 -- # break 00:06:07.365 11:09:14 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:07.365 11:09:14 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:07.365 11:09:14 json_config -- json_config/common.sh@31 -- # local app=target 00:06:07.365 11:09:14 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.365 11:09:14 json_config -- json_config/common.sh@35 -- # [[ -n 58166 ]] 00:06:07.365 11:09:14 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58166 00:06:07.365 11:09:14 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.365 11:09:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.365 11:09:14 json_config -- json_config/common.sh@41 -- # kill -0 58166 00:06:07.365 11:09:14 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:06:07.932 11:09:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:07.932 11:09:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.932 11:09:14 json_config -- json_config/common.sh@41 -- # kill -0 58166 00:06:07.932 11:09:14 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.499 11:09:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.500 11:09:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.500 11:09:15 json_config -- json_config/common.sh@41 -- # kill -0 58166 00:06:08.500 11:09:15 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.500 11:09:15 json_config -- json_config/common.sh@43 -- # break 00:06:08.500 11:09:15 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.500 SPDK target shutdown done 00:06:08.500 11:09:15 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.500 INFO: relaunching applications... 00:06:08.500 11:09:15 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:08.500 11:09:15 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.500 11:09:15 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.500 11:09:15 json_config -- json_config/common.sh@10 -- # shift 00:06:08.500 11:09:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.500 11:09:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.500 11:09:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.500 11:09:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.500 11:09:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.500 11:09:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58380 00:06:08.500 Waiting for target to run... 00:06:08.500 11:09:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.500 11:09:15 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.500 11:09:15 json_config -- json_config/common.sh@25 -- # waitforlisten 58380 /var/tmp/spdk_tgt.sock 00:06:08.500 11:09:15 json_config -- common/autotest_common.sh@835 -- # '[' -z 58380 ']' 00:06:08.500 11:09:15 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.500 11:09:15 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.500 11:09:15 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.500 11:09:15 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.500 11:09:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.500 [2024-12-10 11:09:15.283127] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:06:08.500 [2024-12-10 11:09:15.283295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58380 ] 00:06:09.070 [2024-12-10 11:09:15.623120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.070 [2024-12-10 11:09:15.707138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.329 [2024-12-10 11:09:16.019649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.898 [2024-12-10 11:09:16.613359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.898 [2024-12-10 11:09:16.645528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.898 11:09:16 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.898 11:09:16 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:09.898 00:06:09.898 11:09:16 json_config -- json_config/common.sh@26 -- # echo '' 00:06:09.898 11:09:16 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:09.898 INFO: Checking if target configuration is the same... 00:06:09.898 11:09:16 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:09.898 11:09:16 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:09.898 11:09:16 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:09.898 11:09:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.898 + '[' 2 -ne 2 ']' 00:06:09.898 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:09.898 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:09.898 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:09.898 +++ basename /dev/fd/62 00:06:09.898 ++ mktemp /tmp/62.XXX 00:06:09.898 + tmp_file_1=/tmp/62.lue 00:06:09.898 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:09.898 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:09.898 + tmp_file_2=/tmp/spdk_tgt_config.json.cq5 00:06:09.898 + ret=0 00:06:09.898 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.467 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.467 + diff -u /tmp/62.lue /tmp/spdk_tgt_config.json.cq5 00:06:10.467 + echo 'INFO: JSON config files are the same' 00:06:10.467 INFO: JSON config files are the same 00:06:10.467 + rm /tmp/62.lue /tmp/spdk_tgt_config.json.cq5 00:06:10.467 + exit 0 00:06:10.467 11:09:17 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:10.467 INFO: changing configuration and checking if this can be detected... 00:06:10.467 11:09:17 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
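Both the "configuration is the same" check above and the change check that follows rest on one mechanism: dump the live configuration with save_config, normalize it and the on-disk file with config_filter.py, and compare. A condensed sketch of what json_diff.sh is doing here (temporary file names are illustrative, and it assumes config_filter.py filters stdin to stdout, which is how the trace invokes it):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    # Normalize both sides so key ordering cannot cause spurious diffs.
    $RPC save_config | $FILTER -method sort > /tmp/live_sorted.json
    $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
    # An empty diff means no drift; any output means a configuration change was detected.
    diff -u /tmp/file_sorted.json /tmp/live_sorted.json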
00:06:10.467 11:09:17 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.467 11:09:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.726 11:09:17 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.726 11:09:17 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:10.726 11:09:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.726 + '[' 2 -ne 2 ']' 00:06:10.726 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:10.726 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:10.726 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:10.726 +++ basename /dev/fd/62 00:06:10.726 ++ mktemp /tmp/62.XXX 00:06:10.726 + tmp_file_1=/tmp/62.IH7 00:06:10.726 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.726 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.726 + tmp_file_2=/tmp/spdk_tgt_config.json.H8F 00:06:10.726 + ret=0 00:06:10.726 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:11.296 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:11.296 + diff -u /tmp/62.IH7 /tmp/spdk_tgt_config.json.H8F 00:06:11.296 + ret=1 00:06:11.297 + echo '=== Start of file: /tmp/62.IH7 ===' 00:06:11.297 + cat /tmp/62.IH7 00:06:11.297 + echo '=== End of file: /tmp/62.IH7 ===' 00:06:11.297 + echo '' 00:06:11.297 + echo '=== Start of file: /tmp/spdk_tgt_config.json.H8F ===' 00:06:11.297 + cat /tmp/spdk_tgt_config.json.H8F 00:06:11.297 + echo '=== End of file: /tmp/spdk_tgt_config.json.H8F ===' 00:06:11.297 + echo '' 00:06:11.297 + rm /tmp/62.IH7 /tmp/spdk_tgt_config.json.H8F 00:06:11.297 + exit 1 00:06:11.297 INFO: configuration change detected. 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
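For reference, the NVMe-oF side of the configuration being diffed here was built entirely over rpc.py earlier in the test (the malloc bdevs, TCP transport, subsystem, namespaces and listener traced above). Condensed into a standalone sketch against a running spdk_tgt on /var/tmp/spdk_tgt.sock, the sequence is:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420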
00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:11.297 11:09:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.297 11:09:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@324 -- # [[ -n 58380 ]] 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:11.297 11:09:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.297 11:09:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:11.297 11:09:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:11.297 11:09:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 11:09:17 json_config -- json_config/json_config.sh@330 -- # killprocess 58380 00:06:11.297 11:09:17 json_config -- common/autotest_common.sh@954 -- # '[' -z 58380 ']' 00:06:11.297 11:09:17 json_config -- common/autotest_common.sh@958 -- # kill -0 58380 00:06:11.297 11:09:17 json_config -- common/autotest_common.sh@959 -- # uname 00:06:11.297 11:09:18 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.297 11:09:18 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58380 00:06:11.297 11:09:18 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.297 killing process with pid 58380 00:06:11.297 11:09:18 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.297 11:09:18 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58380' 00:06:11.297 11:09:18 json_config -- common/autotest_common.sh@973 -- # kill 58380 00:06:11.297 11:09:18 json_config -- common/autotest_common.sh@978 -- # wait 58380 00:06:12.279 11:09:18 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.279 11:09:18 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:12.279 11:09:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.279 11:09:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.279 11:09:18 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:12.279 INFO: Success 00:06:12.279 11:09:18 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:12.279 00:06:12.279 real 0m10.938s 00:06:12.279 user 0m14.887s 00:06:12.279 sys 0m1.729s 00:06:12.279 
11:09:18 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.279 ************************************ 00:06:12.279 END TEST json_config 00:06:12.279 ************************************ 00:06:12.279 11:09:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.279 11:09:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:12.279 11:09:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.279 11:09:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.279 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:06:12.279 ************************************ 00:06:12.279 START TEST json_config_extra_key 00:06:12.279 ************************************ 00:06:12.279 11:09:18 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:12.279 11:09:19 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.279 11:09:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.279 11:09:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.539 11:09:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.539 11:09:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.539 11:09:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:12.540 11:09:19 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.540 11:09:19 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.540 --rc genhtml_branch_coverage=1 00:06:12.540 --rc genhtml_function_coverage=1 00:06:12.540 --rc genhtml_legend=1 00:06:12.540 --rc geninfo_all_blocks=1 00:06:12.540 --rc geninfo_unexecuted_blocks=1 00:06:12.540 00:06:12.540 ' 00:06:12.540 11:09:19 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.540 --rc genhtml_branch_coverage=1 00:06:12.540 --rc genhtml_function_coverage=1 00:06:12.540 --rc genhtml_legend=1 00:06:12.540 --rc geninfo_all_blocks=1 00:06:12.540 --rc geninfo_unexecuted_blocks=1 00:06:12.540 00:06:12.540 ' 00:06:12.540 11:09:19 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.540 --rc genhtml_branch_coverage=1 00:06:12.540 --rc genhtml_function_coverage=1 00:06:12.540 --rc genhtml_legend=1 00:06:12.540 --rc geninfo_all_blocks=1 00:06:12.540 --rc geninfo_unexecuted_blocks=1 00:06:12.540 00:06:12.540 ' 00:06:12.540 11:09:19 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.540 --rc genhtml_branch_coverage=1 00:06:12.540 --rc genhtml_function_coverage=1 00:06:12.540 --rc genhtml_legend=1 00:06:12.540 --rc geninfo_all_blocks=1 00:06:12.540 --rc geninfo_unexecuted_blocks=1 00:06:12.540 00:06:12.540 ' 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.540 11:09:19 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.540 11:09:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.540 11:09:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.540 11:09:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.540 11:09:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.540 11:09:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:12.540 11:09:19 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.540 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.540 11:09:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:12.540 INFO: launching applications... 00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
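The associative arrays declared above are how json_config/common.sh keeps per-app state: one entry per app name ('target' here) for its RPC socket, extra parameters, config file and, once launched, its pid. json_config_test_start_app essentially stitches them into the spdk_tgt command line seen in the next step; a minimal sketch of that assembly (values copied from the declarations above):

    declare -A app_pid app_socket app_params configs_path
    app=target
    app_socket[$app]=/var/tmp/spdk_tgt.sock
    app_params[$app]='-m 0x1 -s 1024'
    configs_path[$app]=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
    # Params are left unquoted on purpose so '-m 0x1 -s 1024' splits into separate arguments.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!    # later consumed by waitforlisten and the shutdown helper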
00:06:12.540 11:09:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58541 00:06:12.541 Waiting for target to run... 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:12.541 11:09:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58541 /var/tmp/spdk_tgt.sock 00:06:12.541 11:09:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58541 ']' 00:06:12.541 11:09:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.541 11:09:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:12.541 11:09:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.541 11:09:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.541 11:09:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.541 [2024-12-10 11:09:19.283132] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:12.541 [2024-12-10 11:09:19.283321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58541 ] 00:06:13.109 [2024-12-10 11:09:19.646800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.109 [2024-12-10 11:09:19.736530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.109 [2024-12-10 11:09:19.928765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.677 11:09:20 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.677 11:09:20 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:13.677 00:06:13.677 11:09:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:13.677 INFO: shutting down applications... 00:06:13.677 11:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
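The shutdown that follows (like the earlier one for pid 58166) is json_config_test_shutdown_app from common.sh: send SIGINT to the recorded pid, then poll it for up to 30 half-second intervals until kill -0 stops succeeding. A minimal sketch of that pattern:

    app_pid=58541                      # pid recorded at launch; value taken from this run
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done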
00:06:13.677 11:09:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:13.677 11:09:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:13.677 11:09:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:13.677 11:09:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58541 ]] 00:06:13.677 11:09:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58541 00:06:13.677 11:09:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:13.677 11:09:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.677 11:09:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58541 00:06:13.677 11:09:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.245 11:09:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:14.245 11:09:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.245 11:09:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58541 00:06:14.245 11:09:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.813 11:09:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:14.813 11:09:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.813 11:09:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58541 00:06:14.813 11:09:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.072 11:09:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.072 11:09:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.072 11:09:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58541 00:06:15.072 11:09:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.640 11:09:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.640 11:09:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.640 11:09:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58541 00:06:15.640 11:09:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.222 11:09:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.222 11:09:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.222 11:09:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58541 00:06:16.222 11:09:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:16.223 11:09:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:16.223 11:09:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:16.223 SPDK target shutdown done 00:06:16.223 11:09:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:16.223 Success 00:06:16.223 11:09:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:16.223 00:06:16.223 real 0m3.919s 00:06:16.223 user 0m3.591s 00:06:16.223 sys 0m0.494s 00:06:16.223 11:09:22 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.223 11:09:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:16.223 ************************************ 00:06:16.223 END TEST json_config_extra_key 00:06:16.223 ************************************ 00:06:16.223 11:09:22 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.223 11:09:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.223 11:09:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.223 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.223 ************************************ 00:06:16.223 START TEST alias_rpc 00:06:16.223 ************************************ 00:06:16.223 11:09:22 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.223 * Looking for test storage... 00:06:16.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:16.223 11:09:23 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.223 11:09:23 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.223 11:09:23 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.482 11:09:23 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.482 --rc genhtml_branch_coverage=1 00:06:16.482 --rc genhtml_function_coverage=1 00:06:16.482 --rc genhtml_legend=1 00:06:16.482 --rc geninfo_all_blocks=1 00:06:16.482 --rc geninfo_unexecuted_blocks=1 00:06:16.482 00:06:16.482 ' 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.482 --rc genhtml_branch_coverage=1 00:06:16.482 --rc genhtml_function_coverage=1 00:06:16.482 --rc genhtml_legend=1 00:06:16.482 --rc geninfo_all_blocks=1 00:06:16.482 --rc geninfo_unexecuted_blocks=1 00:06:16.482 00:06:16.482 ' 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.482 --rc genhtml_branch_coverage=1 00:06:16.482 --rc genhtml_function_coverage=1 00:06:16.482 --rc genhtml_legend=1 00:06:16.482 --rc geninfo_all_blocks=1 00:06:16.482 --rc geninfo_unexecuted_blocks=1 00:06:16.482 00:06:16.482 ' 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.482 --rc genhtml_branch_coverage=1 00:06:16.482 --rc genhtml_function_coverage=1 00:06:16.482 --rc genhtml_legend=1 00:06:16.482 --rc geninfo_all_blocks=1 00:06:16.482 --rc geninfo_unexecuted_blocks=1 00:06:16.482 00:06:16.482 ' 00:06:16.482 11:09:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.482 11:09:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58640 00:06:16.482 11:09:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58640 00:06:16.482 11:09:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58640 ']' 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.482 11:09:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.482 [2024-12-10 11:09:23.299043] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:16.482 [2024-12-10 11:09:23.299231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58640 ] 00:06:16.741 [2024-12-10 11:09:23.487594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.000 [2024-12-10 11:09:23.614414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.259 [2024-12-10 11:09:23.836166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.826 11:09:24 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.826 11:09:24 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.827 11:09:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:18.086 11:09:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58640 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58640 ']' 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58640 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58640 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.086 killing process with pid 58640 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58640' 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@973 -- # kill 58640 00:06:18.086 11:09:24 alias_rpc -- common/autotest_common.sh@978 -- # wait 58640 00:06:19.990 ************************************ 00:06:19.990 END TEST alias_rpc 00:06:19.990 ************************************ 00:06:19.990 00:06:19.990 real 0m3.748s 00:06:19.990 user 0m4.025s 00:06:19.990 sys 0m0.517s 00:06:19.990 11:09:26 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.990 11:09:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.990 11:09:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:19.990 11:09:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:19.990 11:09:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.990 11:09:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.990 11:09:26 -- common/autotest_common.sh@10 -- # set +x 00:06:19.990 ************************************ 00:06:19.990 START TEST spdkcli_tcp 00:06:19.990 ************************************ 00:06:19.990 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:19.990 * Looking for test storage... 
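In the spdkcli_tcp run that follows, the point of interest is the transport: the target still serves RPC on a UNIX domain socket, and the test exposes it on TCP port 9998 with socat so rpc.py can talk to 127.0.0.1 instead. A minimal sketch of that bridge, using the same commands the trace below shows (socat without 'fork' serves a single connection and then exits):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods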
00:06:20.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.249 11:09:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.249 --rc genhtml_branch_coverage=1 00:06:20.249 --rc genhtml_function_coverage=1 00:06:20.249 --rc genhtml_legend=1 00:06:20.249 --rc geninfo_all_blocks=1 00:06:20.249 --rc geninfo_unexecuted_blocks=1 00:06:20.249 00:06:20.249 ' 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.249 --rc genhtml_branch_coverage=1 00:06:20.249 --rc genhtml_function_coverage=1 00:06:20.249 --rc genhtml_legend=1 00:06:20.249 --rc geninfo_all_blocks=1 00:06:20.249 --rc geninfo_unexecuted_blocks=1 00:06:20.249 
00:06:20.249 ' 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.249 --rc genhtml_branch_coverage=1 00:06:20.249 --rc genhtml_function_coverage=1 00:06:20.249 --rc genhtml_legend=1 00:06:20.249 --rc geninfo_all_blocks=1 00:06:20.249 --rc geninfo_unexecuted_blocks=1 00:06:20.249 00:06:20.249 ' 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.249 --rc genhtml_branch_coverage=1 00:06:20.249 --rc genhtml_function_coverage=1 00:06:20.249 --rc genhtml_legend=1 00:06:20.249 --rc geninfo_all_blocks=1 00:06:20.249 --rc geninfo_unexecuted_blocks=1 00:06:20.249 00:06:20.249 ' 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58747 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58747 00:06:20.249 11:09:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58747 ']' 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.249 11:09:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.249 [2024-12-10 11:09:27.066281] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:06:20.249 [2024-12-10 11:09:27.066517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58747 ] 00:06:20.508 [2024-12-10 11:09:27.252073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.767 [2024-12-10 11:09:27.363059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.767 [2024-12-10 11:09:27.363071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.767 [2024-12-10 11:09:27.574833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.334 11:09:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.334 11:09:28 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:21.334 11:09:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58764 00:06:21.334 11:09:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:21.334 11:09:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:21.594 [ 00:06:21.594 "bdev_malloc_delete", 00:06:21.594 "bdev_malloc_create", 00:06:21.594 "bdev_null_resize", 00:06:21.594 "bdev_null_delete", 00:06:21.594 "bdev_null_create", 00:06:21.594 "bdev_nvme_cuse_unregister", 00:06:21.594 "bdev_nvme_cuse_register", 00:06:21.594 "bdev_opal_new_user", 00:06:21.594 "bdev_opal_set_lock_state", 00:06:21.594 "bdev_opal_delete", 00:06:21.594 "bdev_opal_get_info", 00:06:21.594 "bdev_opal_create", 00:06:21.594 "bdev_nvme_opal_revert", 00:06:21.594 "bdev_nvme_opal_init", 00:06:21.594 "bdev_nvme_send_cmd", 00:06:21.594 "bdev_nvme_set_keys", 00:06:21.594 "bdev_nvme_get_path_iostat", 00:06:21.594 "bdev_nvme_get_mdns_discovery_info", 00:06:21.594 "bdev_nvme_stop_mdns_discovery", 00:06:21.594 "bdev_nvme_start_mdns_discovery", 00:06:21.594 "bdev_nvme_set_multipath_policy", 00:06:21.594 "bdev_nvme_set_preferred_path", 00:06:21.594 "bdev_nvme_get_io_paths", 00:06:21.594 "bdev_nvme_remove_error_injection", 00:06:21.594 "bdev_nvme_add_error_injection", 00:06:21.594 "bdev_nvme_get_discovery_info", 00:06:21.594 "bdev_nvme_stop_discovery", 00:06:21.594 "bdev_nvme_start_discovery", 00:06:21.594 "bdev_nvme_get_controller_health_info", 00:06:21.594 "bdev_nvme_disable_controller", 00:06:21.594 "bdev_nvme_enable_controller", 00:06:21.594 "bdev_nvme_reset_controller", 00:06:21.594 "bdev_nvme_get_transport_statistics", 00:06:21.594 "bdev_nvme_apply_firmware", 00:06:21.594 "bdev_nvme_detach_controller", 00:06:21.594 "bdev_nvme_get_controllers", 00:06:21.594 "bdev_nvme_attach_controller", 00:06:21.594 "bdev_nvme_set_hotplug", 00:06:21.594 "bdev_nvme_set_options", 00:06:21.594 "bdev_passthru_delete", 00:06:21.594 "bdev_passthru_create", 00:06:21.594 "bdev_lvol_set_parent_bdev", 00:06:21.594 "bdev_lvol_set_parent", 00:06:21.594 "bdev_lvol_check_shallow_copy", 00:06:21.594 "bdev_lvol_start_shallow_copy", 00:06:21.594 "bdev_lvol_grow_lvstore", 00:06:21.594 "bdev_lvol_get_lvols", 00:06:21.594 "bdev_lvol_get_lvstores", 00:06:21.594 "bdev_lvol_delete", 00:06:21.594 "bdev_lvol_set_read_only", 00:06:21.594 "bdev_lvol_resize", 00:06:21.594 "bdev_lvol_decouple_parent", 00:06:21.594 "bdev_lvol_inflate", 00:06:21.594 "bdev_lvol_rename", 00:06:21.594 "bdev_lvol_clone_bdev", 00:06:21.594 "bdev_lvol_clone", 00:06:21.594 "bdev_lvol_snapshot", 
00:06:21.594 "bdev_lvol_create", 00:06:21.594 "bdev_lvol_delete_lvstore", 00:06:21.594 "bdev_lvol_rename_lvstore", 00:06:21.594 "bdev_lvol_create_lvstore", 00:06:21.594 "bdev_raid_set_options", 00:06:21.594 "bdev_raid_remove_base_bdev", 00:06:21.594 "bdev_raid_add_base_bdev", 00:06:21.594 "bdev_raid_delete", 00:06:21.594 "bdev_raid_create", 00:06:21.594 "bdev_raid_get_bdevs", 00:06:21.594 "bdev_error_inject_error", 00:06:21.594 "bdev_error_delete", 00:06:21.594 "bdev_error_create", 00:06:21.594 "bdev_split_delete", 00:06:21.594 "bdev_split_create", 00:06:21.594 "bdev_delay_delete", 00:06:21.594 "bdev_delay_create", 00:06:21.594 "bdev_delay_update_latency", 00:06:21.594 "bdev_zone_block_delete", 00:06:21.594 "bdev_zone_block_create", 00:06:21.594 "blobfs_create", 00:06:21.594 "blobfs_detect", 00:06:21.594 "blobfs_set_cache_size", 00:06:21.594 "bdev_aio_delete", 00:06:21.594 "bdev_aio_rescan", 00:06:21.594 "bdev_aio_create", 00:06:21.594 "bdev_ftl_set_property", 00:06:21.594 "bdev_ftl_get_properties", 00:06:21.595 "bdev_ftl_get_stats", 00:06:21.595 "bdev_ftl_unmap", 00:06:21.595 "bdev_ftl_unload", 00:06:21.595 "bdev_ftl_delete", 00:06:21.595 "bdev_ftl_load", 00:06:21.595 "bdev_ftl_create", 00:06:21.595 "bdev_virtio_attach_controller", 00:06:21.595 "bdev_virtio_scsi_get_devices", 00:06:21.595 "bdev_virtio_detach_controller", 00:06:21.595 "bdev_virtio_blk_set_hotplug", 00:06:21.595 "bdev_iscsi_delete", 00:06:21.595 "bdev_iscsi_create", 00:06:21.595 "bdev_iscsi_set_options", 00:06:21.595 "bdev_uring_delete", 00:06:21.595 "bdev_uring_rescan", 00:06:21.595 "bdev_uring_create", 00:06:21.595 "accel_error_inject_error", 00:06:21.595 "ioat_scan_accel_module", 00:06:21.595 "dsa_scan_accel_module", 00:06:21.595 "iaa_scan_accel_module", 00:06:21.595 "vfu_virtio_create_fs_endpoint", 00:06:21.595 "vfu_virtio_create_scsi_endpoint", 00:06:21.595 "vfu_virtio_scsi_remove_target", 00:06:21.595 "vfu_virtio_scsi_add_target", 00:06:21.595 "vfu_virtio_create_blk_endpoint", 00:06:21.595 "vfu_virtio_delete_endpoint", 00:06:21.595 "keyring_file_remove_key", 00:06:21.595 "keyring_file_add_key", 00:06:21.595 "keyring_linux_set_options", 00:06:21.595 "fsdev_aio_delete", 00:06:21.595 "fsdev_aio_create", 00:06:21.595 "iscsi_get_histogram", 00:06:21.595 "iscsi_enable_histogram", 00:06:21.595 "iscsi_set_options", 00:06:21.595 "iscsi_get_auth_groups", 00:06:21.595 "iscsi_auth_group_remove_secret", 00:06:21.595 "iscsi_auth_group_add_secret", 00:06:21.595 "iscsi_delete_auth_group", 00:06:21.595 "iscsi_create_auth_group", 00:06:21.595 "iscsi_set_discovery_auth", 00:06:21.595 "iscsi_get_options", 00:06:21.595 "iscsi_target_node_request_logout", 00:06:21.595 "iscsi_target_node_set_redirect", 00:06:21.595 "iscsi_target_node_set_auth", 00:06:21.595 "iscsi_target_node_add_lun", 00:06:21.595 "iscsi_get_stats", 00:06:21.595 "iscsi_get_connections", 00:06:21.595 "iscsi_portal_group_set_auth", 00:06:21.595 "iscsi_start_portal_group", 00:06:21.595 "iscsi_delete_portal_group", 00:06:21.595 "iscsi_create_portal_group", 00:06:21.595 "iscsi_get_portal_groups", 00:06:21.595 "iscsi_delete_target_node", 00:06:21.595 "iscsi_target_node_remove_pg_ig_maps", 00:06:21.595 "iscsi_target_node_add_pg_ig_maps", 00:06:21.595 "iscsi_create_target_node", 00:06:21.595 "iscsi_get_target_nodes", 00:06:21.595 "iscsi_delete_initiator_group", 00:06:21.595 "iscsi_initiator_group_remove_initiators", 00:06:21.595 "iscsi_initiator_group_add_initiators", 00:06:21.595 "iscsi_create_initiator_group", 00:06:21.595 "iscsi_get_initiator_groups", 00:06:21.595 
"nvmf_set_crdt", 00:06:21.595 "nvmf_set_config", 00:06:21.595 "nvmf_set_max_subsystems", 00:06:21.595 "nvmf_stop_mdns_prr", 00:06:21.595 "nvmf_publish_mdns_prr", 00:06:21.595 "nvmf_subsystem_get_listeners", 00:06:21.595 "nvmf_subsystem_get_qpairs", 00:06:21.595 "nvmf_subsystem_get_controllers", 00:06:21.595 "nvmf_get_stats", 00:06:21.595 "nvmf_get_transports", 00:06:21.595 "nvmf_create_transport", 00:06:21.595 "nvmf_get_targets", 00:06:21.595 "nvmf_delete_target", 00:06:21.595 "nvmf_create_target", 00:06:21.595 "nvmf_subsystem_allow_any_host", 00:06:21.595 "nvmf_subsystem_set_keys", 00:06:21.595 "nvmf_subsystem_remove_host", 00:06:21.595 "nvmf_subsystem_add_host", 00:06:21.595 "nvmf_ns_remove_host", 00:06:21.595 "nvmf_ns_add_host", 00:06:21.595 "nvmf_subsystem_remove_ns", 00:06:21.595 "nvmf_subsystem_set_ns_ana_group", 00:06:21.595 "nvmf_subsystem_add_ns", 00:06:21.595 "nvmf_subsystem_listener_set_ana_state", 00:06:21.595 "nvmf_discovery_get_referrals", 00:06:21.595 "nvmf_discovery_remove_referral", 00:06:21.595 "nvmf_discovery_add_referral", 00:06:21.595 "nvmf_subsystem_remove_listener", 00:06:21.595 "nvmf_subsystem_add_listener", 00:06:21.595 "nvmf_delete_subsystem", 00:06:21.595 "nvmf_create_subsystem", 00:06:21.595 "nvmf_get_subsystems", 00:06:21.595 "env_dpdk_get_mem_stats", 00:06:21.595 "nbd_get_disks", 00:06:21.595 "nbd_stop_disk", 00:06:21.595 "nbd_start_disk", 00:06:21.595 "ublk_recover_disk", 00:06:21.595 "ublk_get_disks", 00:06:21.595 "ublk_stop_disk", 00:06:21.595 "ublk_start_disk", 00:06:21.595 "ublk_destroy_target", 00:06:21.595 "ublk_create_target", 00:06:21.595 "virtio_blk_create_transport", 00:06:21.595 "virtio_blk_get_transports", 00:06:21.595 "vhost_controller_set_coalescing", 00:06:21.595 "vhost_get_controllers", 00:06:21.595 "vhost_delete_controller", 00:06:21.595 "vhost_create_blk_controller", 00:06:21.595 "vhost_scsi_controller_remove_target", 00:06:21.595 "vhost_scsi_controller_add_target", 00:06:21.595 "vhost_start_scsi_controller", 00:06:21.595 "vhost_create_scsi_controller", 00:06:21.595 "thread_set_cpumask", 00:06:21.595 "scheduler_set_options", 00:06:21.595 "framework_get_governor", 00:06:21.595 "framework_get_scheduler", 00:06:21.595 "framework_set_scheduler", 00:06:21.595 "framework_get_reactors", 00:06:21.595 "thread_get_io_channels", 00:06:21.595 "thread_get_pollers", 00:06:21.595 "thread_get_stats", 00:06:21.595 "framework_monitor_context_switch", 00:06:21.595 "spdk_kill_instance", 00:06:21.595 "log_enable_timestamps", 00:06:21.595 "log_get_flags", 00:06:21.595 "log_clear_flag", 00:06:21.595 "log_set_flag", 00:06:21.595 "log_get_level", 00:06:21.595 "log_set_level", 00:06:21.595 "log_get_print_level", 00:06:21.595 "log_set_print_level", 00:06:21.595 "framework_enable_cpumask_locks", 00:06:21.595 "framework_disable_cpumask_locks", 00:06:21.595 "framework_wait_init", 00:06:21.595 "framework_start_init", 00:06:21.595 "scsi_get_devices", 00:06:21.595 "bdev_get_histogram", 00:06:21.595 "bdev_enable_histogram", 00:06:21.595 "bdev_set_qos_limit", 00:06:21.595 "bdev_set_qd_sampling_period", 00:06:21.595 "bdev_get_bdevs", 00:06:21.595 "bdev_reset_iostat", 00:06:21.595 "bdev_get_iostat", 00:06:21.595 "bdev_examine", 00:06:21.595 "bdev_wait_for_examine", 00:06:21.595 "bdev_set_options", 00:06:21.595 "accel_get_stats", 00:06:21.595 "accel_set_options", 00:06:21.595 "accel_set_driver", 00:06:21.595 "accel_crypto_key_destroy", 00:06:21.595 "accel_crypto_keys_get", 00:06:21.596 "accel_crypto_key_create", 00:06:21.596 "accel_assign_opc", 00:06:21.596 
"accel_get_module_info", 00:06:21.596 "accel_get_opc_assignments", 00:06:21.596 "vmd_rescan", 00:06:21.596 "vmd_remove_device", 00:06:21.596 "vmd_enable", 00:06:21.596 "sock_get_default_impl", 00:06:21.596 "sock_set_default_impl", 00:06:21.596 "sock_impl_set_options", 00:06:21.596 "sock_impl_get_options", 00:06:21.596 "iobuf_get_stats", 00:06:21.596 "iobuf_set_options", 00:06:21.596 "keyring_get_keys", 00:06:21.596 "vfu_tgt_set_base_path", 00:06:21.596 "framework_get_pci_devices", 00:06:21.596 "framework_get_config", 00:06:21.596 "framework_get_subsystems", 00:06:21.596 "fsdev_set_opts", 00:06:21.596 "fsdev_get_opts", 00:06:21.596 "trace_get_info", 00:06:21.596 "trace_get_tpoint_group_mask", 00:06:21.596 "trace_disable_tpoint_group", 00:06:21.596 "trace_enable_tpoint_group", 00:06:21.596 "trace_clear_tpoint_mask", 00:06:21.596 "trace_set_tpoint_mask", 00:06:21.596 "notify_get_notifications", 00:06:21.596 "notify_get_types", 00:06:21.596 "spdk_get_version", 00:06:21.596 "rpc_get_methods" 00:06:21.596 ] 00:06:21.596 11:09:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:21.596 11:09:28 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.596 11:09:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.855 11:09:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:21.855 11:09:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58747 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58747 ']' 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58747 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58747 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.855 killing process with pid 58747 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58747' 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58747 00:06:21.855 11:09:28 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58747 00:06:23.759 ************************************ 00:06:23.759 END TEST spdkcli_tcp 00:06:23.759 ************************************ 00:06:23.759 00:06:23.759 real 0m3.738s 00:06:23.760 user 0m6.844s 00:06:23.760 sys 0m0.574s 00:06:23.760 11:09:30 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.760 11:09:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.760 11:09:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:23.760 11:09:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.760 11:09:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.760 11:09:30 -- common/autotest_common.sh@10 -- # set +x 00:06:23.760 ************************************ 00:06:23.760 START TEST dpdk_mem_utility 00:06:23.760 ************************************ 00:06:23.760 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.019 * Looking for test storage... 
00:06:24.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.019 11:09:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.019 --rc genhtml_branch_coverage=1 00:06:24.019 --rc genhtml_function_coverage=1 00:06:24.019 --rc genhtml_legend=1 00:06:24.019 --rc geninfo_all_blocks=1 00:06:24.019 --rc geninfo_unexecuted_blocks=1 00:06:24.019 00:06:24.019 ' 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.019 --rc 
genhtml_branch_coverage=1 00:06:24.019 --rc genhtml_function_coverage=1 00:06:24.019 --rc genhtml_legend=1 00:06:24.019 --rc geninfo_all_blocks=1 00:06:24.019 --rc geninfo_unexecuted_blocks=1 00:06:24.019 00:06:24.019 ' 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.019 --rc genhtml_branch_coverage=1 00:06:24.019 --rc genhtml_function_coverage=1 00:06:24.019 --rc genhtml_legend=1 00:06:24.019 --rc geninfo_all_blocks=1 00:06:24.019 --rc geninfo_unexecuted_blocks=1 00:06:24.019 00:06:24.019 ' 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.019 --rc genhtml_branch_coverage=1 00:06:24.019 --rc genhtml_function_coverage=1 00:06:24.019 --rc genhtml_legend=1 00:06:24.019 --rc geninfo_all_blocks=1 00:06:24.019 --rc geninfo_unexecuted_blocks=1 00:06:24.019 00:06:24.019 ' 00:06:24.019 11:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:24.019 11:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58869 00:06:24.019 11:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58869 00:06:24.019 11:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58869 ']' 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.019 11:09:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.278 [2024-12-10 11:09:30.863003] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:06:24.278 [2024-12-10 11:09:30.863194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58869 ] 00:06:24.278 [2024-12-10 11:09:31.046847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.538 [2024-12-10 11:09:31.143336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.538 [2024-12-10 11:09:31.358685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.106 11:09:31 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.106 11:09:31 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:25.106 11:09:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:25.106 11:09:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:25.106 11:09:31 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.106 11:09:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.106 { 00:06:25.106 "filename": "/tmp/spdk_mem_dump.txt" 00:06:25.106 } 00:06:25.106 11:09:31 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.106 11:09:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:25.367 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:25.367 1 heaps totaling size 824.000000 MiB 00:06:25.367 size: 824.000000 MiB heap id: 0 00:06:25.367 end heaps---------- 00:06:25.367 9 mempools totaling size 603.782043 MiB 00:06:25.367 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:25.367 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:25.367 size: 100.555481 MiB name: bdev_io_58869 00:06:25.367 size: 50.003479 MiB name: msgpool_58869 00:06:25.367 size: 36.509338 MiB name: fsdev_io_58869 00:06:25.367 size: 21.763794 MiB name: PDU_Pool 00:06:25.367 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:25.367 size: 4.133484 MiB name: evtpool_58869 00:06:25.367 size: 0.026123 MiB name: Session_Pool 00:06:25.367 end mempools------- 00:06:25.367 6 memzones totaling size 4.142822 MiB 00:06:25.367 size: 1.000366 MiB name: RG_ring_0_58869 00:06:25.367 size: 1.000366 MiB name: RG_ring_1_58869 00:06:25.367 size: 1.000366 MiB name: RG_ring_4_58869 00:06:25.367 size: 1.000366 MiB name: RG_ring_5_58869 00:06:25.367 size: 0.125366 MiB name: RG_ring_2_58869 00:06:25.367 size: 0.015991 MiB name: RG_ring_3_58869 00:06:25.367 end memzones------- 00:06:25.367 11:09:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:25.367 heap id: 0 total size: 824.000000 MiB number of busy elements: 316 number of free elements: 18 00:06:25.367 list of free elements. 
size: 16.781128 MiB 00:06:25.367 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:25.367 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:25.367 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:25.367 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:25.367 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:25.367 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:25.367 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:25.367 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:25.367 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:25.367 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:25.367 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:25.367 element at address: 0x20001b400000 with size: 0.562683 MiB 00:06:25.367 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:25.367 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:25.367 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:25.367 element at address: 0x200012c00000 with size: 0.433228 MiB 00:06:25.368 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:25.368 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:25.368 list of standard malloc elements. size: 199.287964 MiB 00:06:25.368 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:25.368 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:25.368 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:25.368 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:25.368 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:25.368 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:25.368 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:25.368 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:25.368 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:25.368 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:25.368 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:25.368 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:06:25.368 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:25.368 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:25.368 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b491cc0 with size: 0.000244 MiB 
00:06:25.369 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:25.369 element at 
address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:25.369 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:25.369 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d180 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d780 
with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:25.369 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:25.370 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:25.370 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:25.370 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:25.370 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:25.370 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:25.370 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:25.370 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:25.370 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:25.370 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:25.370 list of memzone associated elements. 
size: 607.930908 MiB 00:06:25.370 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:25.370 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:25.370 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:25.370 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:25.370 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:25.370 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58869_0 00:06:25.370 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:25.370 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58869_0 00:06:25.370 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:25.370 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58869_0 00:06:25.370 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:25.370 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:25.370 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:25.370 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:25.370 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:25.370 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58869_0 00:06:25.370 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:25.370 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58869 00:06:25.370 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:25.370 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58869 00:06:25.370 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:25.370 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:25.370 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:25.370 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:25.370 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:25.370 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:25.370 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:25.370 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:25.370 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:25.370 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58869 00:06:25.370 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:25.370 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58869 00:06:25.370 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:25.370 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58869 00:06:25.370 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:25.370 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58869 00:06:25.370 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:25.370 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58869 00:06:25.370 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:25.370 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58869 00:06:25.370 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:25.370 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:25.370 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:25.370 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:25.370 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:25.370 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:25.370 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:25.370 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58869 00:06:25.370 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:25.370 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58869 00:06:25.370 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:25.370 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:25.370 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:25.370 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:25.370 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:25.370 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58869 00:06:25.370 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:25.370 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:25.370 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:25.370 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58869 00:06:25.370 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:25.370 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58869 00:06:25.370 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:25.370 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58869 00:06:25.370 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:25.370 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:25.370 11:09:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:25.370 11:09:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58869 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58869 ']' 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58869 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58869 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.370 killing process with pid 58869 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58869' 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58869 00:06:25.370 11:09:32 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58869 00:06:27.277 00:06:27.277 real 0m3.516s 00:06:27.277 user 0m3.666s 00:06:27.277 sys 0m0.530s 00:06:27.277 11:09:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.277 ************************************ 00:06:27.277 END TEST dpdk_mem_utility 00:06:27.277 ************************************ 00:06:27.277 11:09:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.277 11:09:34 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:27.277 11:09:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.277 11:09:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.277 11:09:34 -- common/autotest_common.sh@10 -- # set +x 
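The dpdk_mem_utility run above pairs two tools: the env_dpdk_get_mem_stats RPC asks the running spdk_tgt to write its DPDK memory statistics to a dump file (the RPC reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then digests that dump, first as the heap/mempool/memzone summary and then, invoked with -m 0, as the per-element free/malloc/memzone listing for heap 0 shown above. A minimal sketch of the same sequence, assuming a target is already up on the default RPC socket:

  # ask the running target to dump its DPDK memory stats
  # (the RPC reply in the trace names the output file /tmp/spdk_mem_dump.txt)
  ./scripts/rpc.py env_dpdk_get_mem_stats

  # summarize heaps, mempools and memzones from that dump
  ./scripts/dpdk_mem_info.py

  # detailed element listing for heap 0, as in the trace above
  ./scripts/dpdk_mem_info.py -m 0

The figures themselves (the 824 MiB heap, the msgpool/bdev_io/fsdev_io pools keyed by pid 58869) are just what this particular spdk_tgt instance had allocated at the moment of the dump.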
00:06:27.537 ************************************ 00:06:27.537 START TEST event 00:06:27.537 ************************************ 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:27.537 * Looking for test storage... 00:06:27.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.537 11:09:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.537 11:09:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.537 11:09:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.537 11:09:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.537 11:09:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.537 11:09:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.537 11:09:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.537 11:09:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.537 11:09:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.537 11:09:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.537 11:09:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.537 11:09:34 event -- scripts/common.sh@344 -- # case "$op" in 00:06:27.537 11:09:34 event -- scripts/common.sh@345 -- # : 1 00:06:27.537 11:09:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.537 11:09:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.537 11:09:34 event -- scripts/common.sh@365 -- # decimal 1 00:06:27.537 11:09:34 event -- scripts/common.sh@353 -- # local d=1 00:06:27.537 11:09:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.537 11:09:34 event -- scripts/common.sh@355 -- # echo 1 00:06:27.537 11:09:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.537 11:09:34 event -- scripts/common.sh@366 -- # decimal 2 00:06:27.537 11:09:34 event -- scripts/common.sh@353 -- # local d=2 00:06:27.537 11:09:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.537 11:09:34 event -- scripts/common.sh@355 -- # echo 2 00:06:27.537 11:09:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.537 11:09:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.537 11:09:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.537 11:09:34 event -- scripts/common.sh@368 -- # return 0 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.537 --rc genhtml_branch_coverage=1 00:06:27.537 --rc genhtml_function_coverage=1 00:06:27.537 --rc genhtml_legend=1 00:06:27.537 --rc geninfo_all_blocks=1 00:06:27.537 --rc geninfo_unexecuted_blocks=1 00:06:27.537 00:06:27.537 ' 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.537 --rc genhtml_branch_coverage=1 00:06:27.537 --rc genhtml_function_coverage=1 00:06:27.537 --rc genhtml_legend=1 00:06:27.537 --rc 
geninfo_all_blocks=1 00:06:27.537 --rc geninfo_unexecuted_blocks=1 00:06:27.537 00:06:27.537 ' 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.537 --rc genhtml_branch_coverage=1 00:06:27.537 --rc genhtml_function_coverage=1 00:06:27.537 --rc genhtml_legend=1 00:06:27.537 --rc geninfo_all_blocks=1 00:06:27.537 --rc geninfo_unexecuted_blocks=1 00:06:27.537 00:06:27.537 ' 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.537 --rc genhtml_branch_coverage=1 00:06:27.537 --rc genhtml_function_coverage=1 00:06:27.537 --rc genhtml_legend=1 00:06:27.537 --rc geninfo_all_blocks=1 00:06:27.537 --rc geninfo_unexecuted_blocks=1 00:06:27.537 00:06:27.537 ' 00:06:27.537 11:09:34 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:27.537 11:09:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:27.537 11:09:34 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:27.537 11:09:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.537 11:09:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.537 ************************************ 00:06:27.537 START TEST event_perf 00:06:27.537 ************************************ 00:06:27.537 11:09:34 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.537 Running I/O for 1 seconds...[2024-12-10 11:09:34.358796] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:27.537 [2024-12-10 11:09:34.358994] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:06:27.797 [2024-12-10 11:09:34.551695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.056 [2024-12-10 11:09:34.685341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.056 [2024-12-10 11:09:34.685479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.056 [2024-12-10 11:09:34.686227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.056 [2024-12-10 11:09:34.686238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.436 Running I/O for 1 seconds... 00:06:29.436 lcore 0: 177157 00:06:29.436 lcore 1: 177157 00:06:29.436 lcore 2: 177159 00:06:29.436 lcore 3: 177158 00:06:29.436 done. 
00:06:29.436 00:06:29.436 real 0m1.602s 00:06:29.436 user 0m4.359s 00:06:29.436 sys 0m0.112s 00:06:29.436 11:09:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.436 11:09:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.436 ************************************ 00:06:29.436 END TEST event_perf 00:06:29.436 ************************************ 00:06:29.436 11:09:35 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:29.436 11:09:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:29.436 11:09:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.436 11:09:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.436 ************************************ 00:06:29.436 START TEST event_reactor 00:06:29.436 ************************************ 00:06:29.436 11:09:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:29.436 [2024-12-10 11:09:36.001154] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:29.436 [2024-12-10 11:09:36.001317] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59011 ] 00:06:29.436 [2024-12-10 11:09:36.174293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.695 [2024-12-10 11:09:36.272900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.073 test_start 00:06:31.073 oneshot 00:06:31.073 tick 100 00:06:31.073 tick 100 00:06:31.073 tick 250 00:06:31.073 tick 100 00:06:31.073 tick 100 00:06:31.073 tick 100 00:06:31.073 tick 250 00:06:31.073 tick 500 00:06:31.073 tick 100 00:06:31.073 tick 100 00:06:31.073 tick 250 00:06:31.073 tick 100 00:06:31.073 tick 100 00:06:31.073 test_end 00:06:31.073 00:06:31.073 real 0m1.525s 00:06:31.073 user 0m1.332s 00:06:31.073 sys 0m0.083s 00:06:31.073 11:09:37 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.073 ************************************ 00:06:31.073 END TEST event_reactor 00:06:31.073 ************************************ 00:06:31.073 11:09:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:31.073 11:09:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.073 11:09:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:31.073 11:09:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.073 11:09:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.073 ************************************ 00:06:31.073 START TEST event_reactor_perf 00:06:31.073 ************************************ 00:06:31.073 11:09:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.073 [2024-12-10 11:09:37.591848] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
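The event tests above are small app-framework benchmarks rather than I/O tests: event_perf starts one reactor per core in the -m mask (0xF here, hence the four lcore tallies) and counts how many events each reactor processes in the -t seconds given, while the reactor test drives a single core with a one-shot event and a handful of timed ticks (the tick 100/250/500 lines in its output). reactor_perf, which starts next in the trace, reports a plain events-per-second figure for one core. A sketch of the invocations as the log runs them, from the root of a built SPDK tree:

  # multi-core event throughput: -m is the core mask, -t the run time in seconds
  ./test/event/event_perf/event_perf -m 0xF -t 1

  # single-core reactor test mixing a one-shot event with periodic ticks
  ./test/event/reactor/reactor -t 1

  # single-core throughput, reported below as "Performance: N events per second"
  ./test/event/reactor_perf/reactor_perf -t 1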
00:06:31.073 [2024-12-10 11:09:37.592020] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59042 ] 00:06:31.073 [2024-12-10 11:09:37.775851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.073 [2024-12-10 11:09:37.873103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.453 test_start 00:06:32.453 test_end 00:06:32.453 Performance: 295674 events per second 00:06:32.453 00:06:32.453 real 0m1.545s 00:06:32.453 user 0m1.340s 00:06:32.453 sys 0m0.091s 00:06:32.453 11:09:39 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.453 ************************************ 00:06:32.453 END TEST event_reactor_perf 00:06:32.453 ************************************ 00:06:32.453 11:09:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.453 11:09:39 event -- event/event.sh@49 -- # uname -s 00:06:32.453 11:09:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:32.453 11:09:39 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:32.453 11:09:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.453 11:09:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.453 11:09:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.453 ************************************ 00:06:32.453 START TEST event_scheduler 00:06:32.453 ************************************ 00:06:32.453 11:09:39 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:32.453 * Looking for test storage... 
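For reference, the three event-framework micro-benchmarks that just completed (event_perf, reactor, reactor_perf) are standalone binaries under test/event/ in the SPDK tree, and the trace above shows their exact invocations. A minimal manual re-run, assuming the same CI layout at /home/vagrant/spdk_repo/spdk and a host already set up for DPDK hugepages, would look like:

    SPDK=/home/vagrant/spdk_repo/spdk
    # per-lcore event throughput on cores 0-3 for 1 second (prints "lcore N: <count>")
    "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1
    # single-reactor tick/oneshot trace for 1 second
    "$SPDK/test/event/reactor/reactor" -t 1
    # events-per-second figure for 1 second
    "$SPDK/test/event/reactor_perf/reactor_perf" -t 1

The -m core mask and -t duration mirror the run_test lines above; the real/user/sys lines in the log are produced by the run_test wrapper timing each test, not by the benchmarks themselves.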
00:06:32.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:32.453 11:09:39 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:32.453 11:09:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:32.453 11:09:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:32.712 11:09:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.712 11:09:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:32.712 11:09:39 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.712 11:09:39 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:32.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.713 --rc genhtml_branch_coverage=1 00:06:32.713 --rc genhtml_function_coverage=1 00:06:32.713 --rc genhtml_legend=1 00:06:32.713 --rc geninfo_all_blocks=1 00:06:32.713 --rc geninfo_unexecuted_blocks=1 00:06:32.713 00:06:32.713 ' 00:06:32.713 11:09:39 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:32.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.713 --rc genhtml_branch_coverage=1 00:06:32.713 --rc genhtml_function_coverage=1 00:06:32.713 --rc genhtml_legend=1 00:06:32.713 --rc geninfo_all_blocks=1 00:06:32.713 --rc geninfo_unexecuted_blocks=1 00:06:32.713 00:06:32.713 ' 00:06:32.713 11:09:39 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:32.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.713 --rc genhtml_branch_coverage=1 00:06:32.713 --rc genhtml_function_coverage=1 00:06:32.713 --rc genhtml_legend=1 00:06:32.713 --rc geninfo_all_blocks=1 00:06:32.713 --rc geninfo_unexecuted_blocks=1 00:06:32.713 00:06:32.713 ' 00:06:32.713 11:09:39 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:32.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.713 --rc genhtml_branch_coverage=1 00:06:32.713 --rc genhtml_function_coverage=1 00:06:32.713 --rc genhtml_legend=1 00:06:32.713 --rc geninfo_all_blocks=1 00:06:32.713 --rc geninfo_unexecuted_blocks=1 00:06:32.713 00:06:32.713 ' 00:06:32.713 11:09:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:32.713 11:09:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59118 00:06:32.713 11:09:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.713 11:09:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59118 00:06:32.713 11:09:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:32.713 11:09:39 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59118 ']' 00:06:32.713 11:09:39 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.713 11:09:39 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.713 11:09:39 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.713 11:09:39 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.713 11:09:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.713 [2024-12-10 11:09:39.472091] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:32.713 [2024-12-10 11:09:39.472283] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59118 ] 00:06:32.972 [2024-12-10 11:09:39.662001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.972 [2024-12-10 11:09:39.767399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.972 [2024-12-10 11:09:39.767557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.972 [2024-12-10 11:09:39.767623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.972 [2024-12-10 11:09:39.767625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.909 11:09:40 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.909 11:09:40 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:33.909 11:09:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:33.909 11:09:40 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.909 11:09:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.909 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.909 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.909 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.909 POWER: Cannot set governor of lcore 0 to performance 00:06:33.909 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.909 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.909 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.909 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.909 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:33.909 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:33.909 POWER: Unable to set Power Management Environment for lcore 0 00:06:33.909 [2024-12-10 11:09:40.510981] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:33.909 [2024-12-10 11:09:40.511010] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:33.909 [2024-12-10 11:09:40.511026] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:33.909 [2024-12-10 11:09:40.511057] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:33.909 [2024-12-10 11:09:40.511071] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:33.909 [2024-12-10 11:09:40.511085] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:33.909 11:09:40 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.909 11:09:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:33.909 11:09:40 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.909 11:09:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.909 [2024-12-10 11:09:40.689108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.169 [2024-12-10 11:09:40.792317] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:34.169 11:09:40 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.169 11:09:40 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.169 11:09:40 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 ************************************ 00:06:34.169 START TEST scheduler_create_thread 00:06:34.169 ************************************ 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 2 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 3 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 4 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 5 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 6 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 7 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 8 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 9 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 10 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.169 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.170 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.170 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:34.170 11:09:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:34.170 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.170 11:09:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 11:09:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.546 ************************************ 00:06:35.546 END TEST scheduler_create_thread 00:06:35.546 ************************************ 00:06:35.546 00:06:35.546 real 0m1.174s 00:06:35.546 user 0m0.013s 00:06:35.546 sys 0m0.006s 00:06:35.546 11:09:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.546 11:09:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 11:09:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:35.546 11:09:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59118 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59118 ']' 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59118 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59118 00:06:35.546 killing process with pid 59118 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
59118' 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59118 00:06:35.546 11:09:42 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59118 00:06:35.806 [2024-12-10 11:09:42.459309] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:36.740 ************************************ 00:06:36.740 END TEST event_scheduler 00:06:36.740 ************************************ 00:06:36.740 00:06:36.740 real 0m4.332s 00:06:36.740 user 0m7.474s 00:06:36.740 sys 0m0.475s 00:06:36.740 11:09:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.740 11:09:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.740 11:09:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:36.740 11:09:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:36.740 11:09:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.740 11:09:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.740 11:09:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.740 ************************************ 00:06:36.740 START TEST app_repeat 00:06:36.740 ************************************ 00:06:36.740 11:09:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:36.740 11:09:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.740 11:09:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.740 11:09:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:36.740 11:09:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.740 11:09:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:36.741 11:09:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:36.741 11:09:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:36.741 Process app_repeat pid: 59213 00:06:36.741 spdk_app_start Round 0 00:06:36.741 11:09:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59213 00:06:36.741 11:09:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.741 11:09:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59213' 00:06:36.741 11:09:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.741 11:09:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:36.741 11:09:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:36.741 11:09:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59213 /var/tmp/spdk-nbd.sock 00:06:36.741 11:09:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59213 ']' 00:06:36.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.741 11:09:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.741 11:09:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.741 11:09:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
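The event_scheduler suite that just finished drives its test app entirely over JSON-RPC: the scheduler binary is started with --wait-for-rpc, the dynamic scheduler is selected, framework init is triggered, and threads are then created, retuned and deleted through an rpc.py plugin. Condensed from the trace above (rpc_cmd is the test framework's wrapper around scripts/rpc.py; the PYTHONPATH line is an assumption about how scheduler_plugin.py is made importable):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &   # app idles until RPCs arrive
    # (the suite waits for /var/tmp/spdk.sock before issuing the first RPC)
    export PYTHONPATH="$SPDK/test/event/scheduler:$PYTHONPATH"                  # assumed: lets rpc.py find the plugin
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }
    rpc framework_set_scheduler dynamic        # must precede framework_start_init
    rpc framework_start_init
    rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50   # ids (11, 12) are returned by the create calls
    rpc --plugin scheduler_plugin scheduler_thread_delete 12

The POWER / dpdk_governor errors above appear benign in this environment: the VM exposes no writable cpufreq interface, so the dynamic scheduler runs without a frequency governor while its thread-placement logic is still exercised, and the test goes on to pass.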
00:06:36.741 11:09:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.741 11:09:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.999 [2024-12-10 11:09:43.615829] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:06:36.999 [2024-12-10 11:09:43.615998] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59213 ] 00:06:36.999 [2024-12-10 11:09:43.801886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.257 [2024-12-10 11:09:43.930872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.257 [2024-12-10 11:09:43.930887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.515 [2024-12-10 11:09:44.119103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.083 11:09:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.083 11:09:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:38.083 11:09:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.341 Malloc0 00:06:38.341 11:09:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.600 Malloc1 00:06:38.600 11:09:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.600 11:09:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.166 /dev/nbd0 00:06:39.166 11:09:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.166 11:09:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.166 1+0 records in 00:06:39.166 1+0 records out 00:06:39.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286288 s, 14.3 MB/s 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.166 11:09:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:39.166 11:09:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.166 11:09:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.166 11:09:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.166 /dev/nbd1 00:06:39.425 11:09:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.425 11:09:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.425 1+0 records in 00:06:39.425 1+0 records out 00:06:39.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363574 s, 11.3 MB/s 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.425 11:09:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.425 11:09:46 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:06:39.425 11:09:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.425 11:09:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.425 11:09:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.425 11:09:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.425 11:09:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.684 { 00:06:39.684 "nbd_device": "/dev/nbd0", 00:06:39.684 "bdev_name": "Malloc0" 00:06:39.684 }, 00:06:39.684 { 00:06:39.684 "nbd_device": "/dev/nbd1", 00:06:39.684 "bdev_name": "Malloc1" 00:06:39.684 } 00:06:39.684 ]' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.684 { 00:06:39.684 "nbd_device": "/dev/nbd0", 00:06:39.684 "bdev_name": "Malloc0" 00:06:39.684 }, 00:06:39.684 { 00:06:39.684 "nbd_device": "/dev/nbd1", 00:06:39.684 "bdev_name": "Malloc1" 00:06:39.684 } 00:06:39.684 ]' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.684 /dev/nbd1' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.684 /dev/nbd1' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.684 256+0 records in 00:06:39.684 256+0 records out 00:06:39.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00745334 s, 141 MB/s 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.684 256+0 records in 00:06:39.684 256+0 records out 00:06:39.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256451 s, 40.9 MB/s 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.684 256+0 records in 00:06:39.684 
256+0 records out 00:06:39.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327173 s, 32.0 MB/s 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.684 11:09:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.943 11:09:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.510 11:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.768 11:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.768 11:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.768 11:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.768 11:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.769 11:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.769 11:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.769 11:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.769 11:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.769 11:09:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.769 11:09:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.769 11:09:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.769 11:09:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.769 11:09:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.071 11:09:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.006 [2024-12-10 11:09:48.826801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.265 [2024-12-10 11:09:48.929548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.265 [2024-12-10 11:09:48.929555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.523 [2024-12-10 11:09:49.092522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.523 [2024-12-10 11:09:49.092741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.523 [2024-12-10 11:09:49.092782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.428 11:09:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.428 spdk_app_start Round 1 00:06:44.428 11:09:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:44.428 11:09:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59213 /var/tmp/spdk-nbd.sock 00:06:44.428 11:09:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59213 ']' 00:06:44.428 11:09:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.428 11:09:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.428 11:09:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
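Each app_repeat round above runs the same nbd_common.sh data-verification pass: create two 64 MiB malloc bdevs over the app's /var/tmp/spdk-nbd.sock RPC socket, export them as kernel NBD devices, push a 1 MiB random pattern through each with dd, read it back with cmp, then detach and kill the app instance. Reduced to a single bdev, the sequence captured above is essentially:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    PATTERN=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    $RPC bdev_malloc_create 64 4096                # 64 MiB bdev, 4 KiB blocks -> reports "Malloc0"
    $RPC nbd_start_disk Malloc0 /dev/nbd0          # expose the bdev through the kernel nbd driver
    dd if=/dev/urandom of="$PATTERN" bs=4096 count=256            # 1 MiB reference pattern
    dd if="$PATTERN" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the NBD device
    cmp -b -n 1M "$PATTERN" /dev/nbd0              # verify the round trip
    rm -f "$PATTERN"
    $RPC nbd_stop_disk /dev/nbd0
    $RPC spdk_kill_instance SIGTERM                # ends the round

The waitfornbd/waitfornbd_exit polling of /proc/partitions in the trace simply guards against racing the kernel's NBD attach and detach.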
00:06:44.428 11:09:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.428 11:09:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.428 11:09:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.428 11:09:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:44.428 11:09:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.686 Malloc0 00:06:44.686 11:09:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.253 Malloc1 00:06:45.253 11:09:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.253 11:09:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.253 11:09:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.253 11:09:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.253 11:09:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.253 11:09:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.254 11:09:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.254 /dev/nbd0 00:06:45.254 11:09:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.512 11:09:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.512 1+0 records in 00:06:45.512 1+0 records out 
00:06:45.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263585 s, 15.5 MB/s 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.512 11:09:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:45.512 11:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.512 11:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.512 11:09:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.771 /dev/nbd1 00:06:45.771 11:09:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.771 11:09:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.771 1+0 records in 00:06:45.771 1+0 records out 00:06:45.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353267 s, 11.6 MB/s 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.771 11:09:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:45.771 11:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.771 11:09:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.771 11:09:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.771 11:09:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.771 11:09:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.030 { 00:06:46.030 "nbd_device": "/dev/nbd0", 00:06:46.030 "bdev_name": "Malloc0" 00:06:46.030 }, 00:06:46.030 { 00:06:46.030 "nbd_device": "/dev/nbd1", 00:06:46.030 "bdev_name": "Malloc1" 00:06:46.030 } 
00:06:46.030 ]' 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.030 { 00:06:46.030 "nbd_device": "/dev/nbd0", 00:06:46.030 "bdev_name": "Malloc0" 00:06:46.030 }, 00:06:46.030 { 00:06:46.030 "nbd_device": "/dev/nbd1", 00:06:46.030 "bdev_name": "Malloc1" 00:06:46.030 } 00:06:46.030 ]' 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.030 /dev/nbd1' 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.030 /dev/nbd1' 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.030 11:09:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.031 256+0 records in 00:06:46.031 256+0 records out 00:06:46.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00777712 s, 135 MB/s 00:06:46.031 11:09:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.031 11:09:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.289 256+0 records in 00:06:46.289 256+0 records out 00:06:46.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268339 s, 39.1 MB/s 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.289 256+0 records in 00:06:46.289 256+0 records out 00:06:46.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031054 s, 33.8 MB/s 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.289 11:09:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.548 11:09:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.806 11:09:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.065 11:09:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.065 11:09:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.631 11:09:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.566 [2024-12-10 11:09:55.246723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.566 [2024-12-10 11:09:55.338266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.566 [2024-12-10 11:09:55.338273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.824 [2024-12-10 11:09:55.500666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.824 [2024-12-10 11:09:55.500831] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.824 [2024-12-10 11:09:55.500851] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.727 spdk_app_start Round 2 00:06:50.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.727 11:09:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.727 11:09:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.727 11:09:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59213 /var/tmp/spdk-nbd.sock 00:06:50.727 11:09:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59213 ']' 00:06:50.727 11:09:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.727 11:09:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.727 11:09:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
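The Round 0/1/2 banners come from the outer loop in event.sh: app_repeat is launched once (pid 59213, with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4) and restarts its SPDK app instance after every SIGTERM, so the script only has to wait for the RPC socket again and repeat the verification. A sketch of that loop, with verify_round standing in (hypothetically) for the malloc/NBD pass shown earlier:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # same pid each round: the app re-creates itself
        verify_round                                          # placeholder for the bdev_malloc/nbd dd+cmp steps
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done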
00:06:50.727 11:09:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.727 11:09:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.986 11:09:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.986 11:09:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:50.986 11:09:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.244 Malloc0 00:06:51.244 11:09:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.503 Malloc1 00:06:51.503 11:09:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.503 11:09:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.762 /dev/nbd0 00:06:51.762 11:09:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.762 11:09:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.762 1+0 records in 00:06:51.762 1+0 records out 
00:06:51.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191035 s, 21.4 MB/s 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.762 11:09:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:51.762 11:09:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.762 11:09:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.762 11:09:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.021 /dev/nbd1 00:06:52.280 11:09:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.280 11:09:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.280 1+0 records in 00:06:52.280 1+0 records out 00:06:52.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295392 s, 13.9 MB/s 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.280 11:09:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.280 11:09:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.280 11:09:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.280 11:09:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.280 11:09:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.280 11:09:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.539 { 00:06:52.539 "nbd_device": "/dev/nbd0", 00:06:52.539 "bdev_name": "Malloc0" 00:06:52.539 }, 00:06:52.539 { 00:06:52.539 "nbd_device": "/dev/nbd1", 00:06:52.539 "bdev_name": "Malloc1" 00:06:52.539 } 
00:06:52.539 ]' 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.539 { 00:06:52.539 "nbd_device": "/dev/nbd0", 00:06:52.539 "bdev_name": "Malloc0" 00:06:52.539 }, 00:06:52.539 { 00:06:52.539 "nbd_device": "/dev/nbd1", 00:06:52.539 "bdev_name": "Malloc1" 00:06:52.539 } 00:06:52.539 ]' 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.539 /dev/nbd1' 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.539 /dev/nbd1' 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.539 256+0 records in 00:06:52.539 256+0 records out 00:06:52.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00766886 s, 137 MB/s 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.539 256+0 records in 00:06:52.539 256+0 records out 00:06:52.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260285 s, 40.3 MB/s 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.539 256+0 records in 00:06:52.539 256+0 records out 00:06:52.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304466 s, 34.4 MB/s 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.539 11:09:59 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.539 11:09:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.798 11:09:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.056 11:09:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.056 11:09:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.057 11:09:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.057 11:09:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.057 11:09:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.057 11:09:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.057 11:09:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.057 11:09:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.315 11:09:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.315 11:09:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.315 11:09:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.574 11:10:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.574 11:10:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.141 11:10:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.077 [2024-12-10 11:10:01.647652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.077 [2024-12-10 11:10:01.740896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.077 [2024-12-10 11:10:01.741028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.335 [2024-12-10 11:10:01.904740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.335 [2024-12-10 11:10:01.904897] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.335 [2024-12-10 11:10:01.904950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.283 11:10:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59213 /var/tmp/spdk-nbd.sock 00:06:57.283 11:10:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59213 ']' 00:06:57.283 11:10:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.283 11:10:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.283 11:10:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:57.283 11:10:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.283 11:10:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:57.283 11:10:04 event.app_repeat -- event/event.sh@39 -- # killprocess 59213 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59213 ']' 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59213 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59213 00:06:57.283 killing process with pid 59213 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59213' 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59213 00:06:57.283 11:10:04 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59213 00:06:58.221 spdk_app_start is called in Round 0. 00:06:58.221 Shutdown signal received, stop current app iteration 00:06:58.221 Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 reinitialization... 00:06:58.221 spdk_app_start is called in Round 1. 00:06:58.221 Shutdown signal received, stop current app iteration 00:06:58.221 Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 reinitialization... 00:06:58.221 spdk_app_start is called in Round 2. 00:06:58.221 Shutdown signal received, stop current app iteration 00:06:58.221 Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 reinitialization... 00:06:58.221 spdk_app_start is called in Round 3. 00:06:58.221 Shutdown signal received, stop current app iteration 00:06:58.221 11:10:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:58.221 11:10:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:58.221 00:06:58.221 real 0m21.366s 00:06:58.221 user 0m47.811s 00:06:58.221 sys 0m2.779s 00:06:58.221 11:10:04 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.221 11:10:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.221 ************************************ 00:06:58.221 END TEST app_repeat 00:06:58.221 ************************************ 00:06:58.221 11:10:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:58.221 11:10:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:58.221 11:10:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.221 11:10:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.221 11:10:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.221 ************************************ 00:06:58.221 START TEST cpu_locks 00:06:58.221 ************************************ 00:06:58.221 11:10:04 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:58.221 * Looking for test storage... 
00:06:58.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:58.221 11:10:05 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:58.221 11:10:05 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.482 11:10:05 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:58.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.482 --rc genhtml_branch_coverage=1 00:06:58.482 --rc genhtml_function_coverage=1 00:06:58.482 --rc genhtml_legend=1 00:06:58.482 --rc geninfo_all_blocks=1 00:06:58.482 --rc geninfo_unexecuted_blocks=1 00:06:58.482 00:06:58.482 ' 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:58.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.482 --rc genhtml_branch_coverage=1 00:06:58.482 --rc genhtml_function_coverage=1 
00:06:58.482 --rc genhtml_legend=1 00:06:58.482 --rc geninfo_all_blocks=1 00:06:58.482 --rc geninfo_unexecuted_blocks=1 00:06:58.482 00:06:58.482 ' 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:58.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.482 --rc genhtml_branch_coverage=1 00:06:58.482 --rc genhtml_function_coverage=1 00:06:58.482 --rc genhtml_legend=1 00:06:58.482 --rc geninfo_all_blocks=1 00:06:58.482 --rc geninfo_unexecuted_blocks=1 00:06:58.482 00:06:58.482 ' 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:58.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.482 --rc genhtml_branch_coverage=1 00:06:58.482 --rc genhtml_function_coverage=1 00:06:58.482 --rc genhtml_legend=1 00:06:58.482 --rc geninfo_all_blocks=1 00:06:58.482 --rc geninfo_unexecuted_blocks=1 00:06:58.482 00:06:58.482 ' 00:06:58.482 11:10:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:58.482 11:10:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:58.482 11:10:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:58.482 11:10:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.482 11:10:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.482 ************************************ 00:06:58.482 START TEST default_locks 00:06:58.482 ************************************ 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59682 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59682 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59682 ']' 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.482 11:10:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.740 [2024-12-10 11:10:05.348512] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:06:58.740 [2024-12-10 11:10:05.348740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ] 00:06:58.740 [2024-12-10 11:10:05.529825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.998 [2024-12-10 11:10:05.622274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.998 [2024-12-10 11:10:05.823311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.565 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.565 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:59.565 11:10:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59682 00:06:59.565 11:10:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.565 11:10:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59682 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59682 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59682 ']' 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59682 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59682 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.133 killing process with pid 59682 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59682' 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59682 00:07:00.133 11:10:06 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59682 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59682 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59682 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59682 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59682 ']' 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.038 
11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.038 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59682) - No such process 00:07:02.038 ERROR: process (pid: 59682) is no longer running 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:02.038 00:07:02.038 real 0m3.621s 00:07:02.038 user 0m3.795s 00:07:02.038 sys 0m0.651s 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.038 ************************************ 00:07:02.038 11:10:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.038 END TEST default_locks 00:07:02.038 ************************************ 00:07:02.038 11:10:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:02.038 11:10:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.038 11:10:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.038 11:10:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.038 ************************************ 00:07:02.038 START TEST default_locks_via_rpc 00:07:02.038 ************************************ 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59756 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59756 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59756 ']' 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:02.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.038 11:10:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.297 [2024-12-10 11:10:08.945288] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:02.297 [2024-12-10 11:10:08.945476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59756 ] 00:07:02.297 [2024-12-10 11:10:09.117764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.556 [2024-12-10 11:10:09.219486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.814 [2024-12-10 11:10:09.433377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59756 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59756 00:07:03.382 11:10:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59756 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59756 ']' 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59756 00:07:03.641 11:10:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59756 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.641 killing process with pid 59756 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59756' 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59756 00:07:03.641 11:10:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59756 00:07:06.205 00:07:06.205 real 0m3.592s 00:07:06.205 user 0m3.760s 00:07:06.205 sys 0m0.568s 00:07:06.205 11:10:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.205 ************************************ 00:07:06.205 END TEST default_locks_via_rpc 00:07:06.205 ************************************ 00:07:06.205 11:10:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.205 11:10:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:06.205 11:10:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.205 11:10:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.205 11:10:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.205 ************************************ 00:07:06.205 START TEST non_locking_app_on_locked_coremask 00:07:06.205 ************************************ 00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59830 00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59830 /var/tmp/spdk.sock 00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59830 ']' 00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.205 11:10:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.205 [2024-12-10 11:10:12.587087] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:06.205 [2024-12-10 11:10:12.587324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59830 ] 00:07:06.205 [2024-12-10 11:10:12.760959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.205 [2024-12-10 11:10:12.859070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.464 [2024-12-10 11:10:13.074583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59846 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59846 /var/tmp/spdk2.sock 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59846 ']' 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.032 11:10:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.032 [2024-12-10 11:10:13.689630] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:07.032 [2024-12-10 11:10:13.689781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59846 ] 00:07:07.291 [2024-12-10 11:10:13.876869] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.291 [2024-12-10 11:10:13.876942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.291 [2024-12-10 11:10:14.065679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.860 [2024-12-10 11:10:14.540933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.760 11:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.760 11:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:09.760 11:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59830 00:07:09.760 11:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59830 00:07:09.760 11:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.695 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59830 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59830 ']' 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59830 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59830 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.696 killing process with pid 59830 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59830' 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59830 00:07:10.696 11:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59830 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59846 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59846 ']' 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59846 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59846 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.884 killing process with pid 59846 00:07:14.884 11:10:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59846' 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59846 00:07:14.884 11:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59846 00:07:16.787 00:07:16.787 real 0m11.121s 00:07:16.787 user 0m11.756s 00:07:16.787 sys 0m1.354s 00:07:16.787 11:10:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.787 11:10:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.787 ************************************ 00:07:16.787 END TEST non_locking_app_on_locked_coremask 00:07:16.787 ************************************ 00:07:17.046 11:10:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:17.046 11:10:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.046 11:10:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.046 11:10:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.046 ************************************ 00:07:17.046 START TEST locking_app_on_unlocked_coremask 00:07:17.046 ************************************ 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59991 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59991 /var/tmp/spdk.sock 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59991 ']' 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.046 11:10:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.046 [2024-12-10 11:10:23.780105] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:17.046 [2024-12-10 11:10:23.780973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59991 ] 00:07:17.305 [2024-12-10 11:10:23.953079] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:17.305 [2024-12-10 11:10:23.953149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.305 [2024-12-10 11:10:24.060239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.563 [2024-12-10 11:10:24.311735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.130 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.130 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.131 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60013 00:07:18.131 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60013 /var/tmp/spdk2.sock 00:07:18.131 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:18.131 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60013 ']' 00:07:18.131 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.131 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.131 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.131 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.131 11:10:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.389 [2024-12-10 11:10:24.979161] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:07:18.389 [2024-12-10 11:10:24.979387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60013 ] 00:07:18.389 [2024-12-10 11:10:25.182442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.648 [2024-12-10 11:10:25.378426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.215 [2024-12-10 11:10:25.807706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.151 11:10:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.151 11:10:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.151 11:10:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60013 00:07:20.151 11:10:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60013 00:07:20.151 11:10:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59991 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59991 ']' 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59991 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59991 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.087 killing process with pid 59991 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59991' 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59991 00:07:21.087 11:10:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59991 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60013 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60013 ']' 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60013 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60013 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60013' 00:07:25.276 killing process with pid 60013 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60013 00:07:25.276 11:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60013 00:07:27.809 00:07:27.809 real 0m10.390s 00:07:27.809 user 0m11.006s 00:07:27.809 sys 0m1.282s 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.809 ************************************ 00:07:27.809 END TEST locking_app_on_unlocked_coremask 00:07:27.809 ************************************ 00:07:27.809 11:10:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:27.809 11:10:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.809 11:10:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.809 11:10:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.809 ************************************ 00:07:27.809 START TEST locking_app_on_locked_coremask 00:07:27.809 ************************************ 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60143 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60143 /var/tmp/spdk.sock 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60143 ']' 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.809 11:10:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.809 [2024-12-10 11:10:34.232818] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:07:27.809 [2024-12-10 11:10:34.233003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60143 ] 00:07:27.809 [2024-12-10 11:10:34.419532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.809 [2024-12-10 11:10:34.526346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.067 [2024-12-10 11:10:34.760266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60164 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60164 /var/tmp/spdk2.sock 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60164 /var/tmp/spdk2.sock 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60164 /var/tmp/spdk2.sock 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60164 ']' 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.670 11:10:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.928 [2024-12-10 11:10:35.500584] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:07:28.928 [2024-12-10 11:10:35.500744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:07:28.928 [2024-12-10 11:10:35.700728] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60143 has claimed it. 00:07:28.928 [2024-12-10 11:10:35.700828] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:29.496 ERROR: process (pid: 60164) is no longer running 00:07:29.496 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60164) - No such process 00:07:29.496 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.496 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:29.496 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:29.496 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.496 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.496 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.496 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60143 00:07:29.496 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60143 00:07:29.496 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60143 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60143 ']' 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60143 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60143 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.063 killing process with pid 60143 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60143' 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60143 00:07:30.063 11:10:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60143 00:07:31.962 00:07:31.962 real 0m4.683s 00:07:31.962 user 0m5.105s 00:07:31.962 sys 0m0.875s 00:07:31.962 11:10:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.962 11:10:38 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:31.962 ************************************ 00:07:31.962 END TEST locking_app_on_locked_coremask 00:07:31.962 ************************************ 00:07:32.221 11:10:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:32.221 11:10:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.221 11:10:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.221 11:10:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.221 ************************************ 00:07:32.221 START TEST locking_overlapped_coremask 00:07:32.221 ************************************ 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60230 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60230 /var/tmp/spdk.sock 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60230 ']' 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.221 11:10:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.221 [2024-12-10 11:10:38.940375] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:07:32.221 [2024-12-10 11:10:38.940525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60230 ] 00:07:32.479 [2024-12-10 11:10:39.120652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.479 [2024-12-10 11:10:39.250430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.479 [2024-12-10 11:10:39.250535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.479 [2024-12-10 11:10:39.251173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.737 [2024-12-10 11:10:39.499136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60254 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60254 /var/tmp/spdk2.sock 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60254 /var/tmp/spdk2.sock 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60254 /var/tmp/spdk2.sock 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60254 ']' 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.304 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.305 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.305 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.305 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.596 [2024-12-10 11:10:40.178881] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:07:33.596 [2024-12-10 11:10:40.179042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60254 ] 00:07:33.596 [2024-12-10 11:10:40.379396] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60230 has claimed it. 00:07:33.596 [2024-12-10 11:10:40.383685] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:34.165 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60254) - No such process 00:07:34.165 ERROR: process (pid: 60254) is no longer running 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60230 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60230 ']' 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60230 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60230 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60230' 00:07:34.165 killing process with pid 60230 00:07:34.165 11:10:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60230 00:07:34.165 11:10:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60230 00:07:36.702 00:07:36.702 real 0m4.202s 00:07:36.702 user 0m11.631s 00:07:36.702 sys 0m0.543s 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.702 ************************************ 00:07:36.702 END TEST locking_overlapped_coremask 00:07:36.702 ************************************ 00:07:36.702 11:10:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:36.702 11:10:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.702 11:10:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.702 11:10:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.702 ************************************ 00:07:36.702 START TEST locking_overlapped_coremask_via_rpc 00:07:36.702 ************************************ 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60318 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60318 /var/tmp/spdk.sock 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60318 ']' 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.702 11:10:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.702 [2024-12-10 11:10:43.207132] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:36.702 [2024-12-10 11:10:43.207312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60318 ] 00:07:36.702 [2024-12-10 11:10:43.379719] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:36.702 [2024-12-10 11:10:43.379779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.702 [2024-12-10 11:10:43.497366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.702 [2024-12-10 11:10:43.497524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.702 [2024-12-10 11:10:43.497535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.961 [2024-12-10 11:10:43.733595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.556 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60336 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60336 /var/tmp/spdk2.sock 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60336 ']' 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.557 11:10:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.817 [2024-12-10 11:10:44.426124] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:37.817 [2024-12-10 11:10:44.426288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60336 ] 00:07:37.817 [2024-12-10 11:10:44.619124] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:37.817 [2024-12-10 11:10:44.623395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.389 [2024-12-10 11:10:44.936313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.389 [2024-12-10 11:10:44.936433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.389 [2024-12-10 11:10:44.936439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:38.647 [2024-12-10 11:10:45.420837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.548 [2024-12-10 11:10:47.273588] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60318 has claimed it. 
00:07:40.548 request: 00:07:40.548 { 00:07:40.548 "method": "framework_enable_cpumask_locks", 00:07:40.548 "req_id": 1 00:07:40.548 } 00:07:40.548 Got JSON-RPC error response 00:07:40.548 response: 00:07:40.548 { 00:07:40.548 "code": -32603, 00:07:40.548 "message": "Failed to claim CPU core: 2" 00:07:40.548 } 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60318 /var/tmp/spdk.sock 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60318 ']' 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.548 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.807 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.807 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:40.807 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60336 /var/tmp/spdk2.sock 00:07:40.807 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60336 ']' 00:07:40.807 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.807 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.807 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:40.807 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.807 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.373 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.373 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:41.373 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:41.373 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:41.373 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:41.373 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:41.373 00:07:41.373 real 0m4.808s 00:07:41.373 user 0m1.955s 00:07:41.373 sys 0m0.221s 00:07:41.373 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.373 ************************************ 00:07:41.373 END TEST locking_overlapped_coremask_via_rpc 00:07:41.373 ************************************ 00:07:41.373 11:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.373 11:10:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:41.373 11:10:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60318 ]] 00:07:41.373 11:10:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60318 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60318 ']' 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60318 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60318 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.373 killing process with pid 60318 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60318' 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60318 00:07:41.373 11:10:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60318 00:07:43.274 11:10:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60336 ]] 00:07:43.274 11:10:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60336 00:07:43.274 11:10:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60336 ']' 00:07:43.274 11:10:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60336 00:07:43.274 11:10:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:43.274 11:10:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.274 
11:10:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60336 00:07:43.274 11:10:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:43.274 11:10:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:43.274 killing process with pid 60336 00:07:43.274 11:10:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60336' 00:07:43.274 11:10:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60336 00:07:43.274 11:10:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60336 00:07:45.807 11:10:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:45.807 11:10:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:45.807 11:10:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60318 ]] 00:07:45.807 11:10:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60318 00:07:45.807 11:10:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60318 ']' 00:07:45.807 11:10:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60318 00:07:45.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60318) - No such process 00:07:45.807 Process with pid 60318 is not found 00:07:45.807 11:10:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60318 is not found' 00:07:45.807 11:10:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60336 ]] 00:07:45.807 11:10:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60336 00:07:45.807 11:10:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60336 ']' 00:07:45.808 11:10:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60336 00:07:45.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60336) - No such process 00:07:45.808 Process with pid 60336 is not found 00:07:45.808 11:10:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60336 is not found' 00:07:45.808 11:10:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:45.808 00:07:45.808 real 0m47.295s 00:07:45.808 user 1m24.448s 00:07:45.808 sys 0m6.518s 00:07:45.808 11:10:52 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.808 11:10:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.808 ************************************ 00:07:45.808 END TEST cpu_locks 00:07:45.808 ************************************ 00:07:45.808 00:07:45.808 real 1m18.193s 00:07:45.808 user 2m26.993s 00:07:45.808 sys 0m10.328s 00:07:45.808 11:10:52 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.808 11:10:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.808 ************************************ 00:07:45.808 END TEST event 00:07:45.808 ************************************ 00:07:45.808 11:10:52 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:45.808 11:10:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.808 11:10:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.808 11:10:52 -- common/autotest_common.sh@10 -- # set +x 00:07:45.808 ************************************ 00:07:45.808 START TEST thread 00:07:45.808 ************************************ 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:45.808 * Looking for test storage... 
00:07:45.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.808 11:10:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.808 11:10:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.808 11:10:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.808 11:10:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.808 11:10:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.808 11:10:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.808 11:10:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.808 11:10:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.808 11:10:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.808 11:10:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.808 11:10:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.808 11:10:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:45.808 11:10:52 thread -- scripts/common.sh@345 -- # : 1 00:07:45.808 11:10:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.808 11:10:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.808 11:10:52 thread -- scripts/common.sh@365 -- # decimal 1 00:07:45.808 11:10:52 thread -- scripts/common.sh@353 -- # local d=1 00:07:45.808 11:10:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.808 11:10:52 thread -- scripts/common.sh@355 -- # echo 1 00:07:45.808 11:10:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.808 11:10:52 thread -- scripts/common.sh@366 -- # decimal 2 00:07:45.808 11:10:52 thread -- scripts/common.sh@353 -- # local d=2 00:07:45.808 11:10:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.808 11:10:52 thread -- scripts/common.sh@355 -- # echo 2 00:07:45.808 11:10:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.808 11:10:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.808 11:10:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.808 11:10:52 thread -- scripts/common.sh@368 -- # return 0 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.808 --rc genhtml_branch_coverage=1 00:07:45.808 --rc genhtml_function_coverage=1 00:07:45.808 --rc genhtml_legend=1 00:07:45.808 --rc geninfo_all_blocks=1 00:07:45.808 --rc geninfo_unexecuted_blocks=1 00:07:45.808 00:07:45.808 ' 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.808 --rc genhtml_branch_coverage=1 00:07:45.808 --rc genhtml_function_coverage=1 00:07:45.808 --rc genhtml_legend=1 00:07:45.808 --rc geninfo_all_blocks=1 00:07:45.808 --rc geninfo_unexecuted_blocks=1 00:07:45.808 00:07:45.808 ' 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:45.808 --rc genhtml_branch_coverage=1 00:07:45.808 --rc genhtml_function_coverage=1 00:07:45.808 --rc genhtml_legend=1 00:07:45.808 --rc geninfo_all_blocks=1 00:07:45.808 --rc geninfo_unexecuted_blocks=1 00:07:45.808 00:07:45.808 ' 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.808 --rc genhtml_branch_coverage=1 00:07:45.808 --rc genhtml_function_coverage=1 00:07:45.808 --rc genhtml_legend=1 00:07:45.808 --rc geninfo_all_blocks=1 00:07:45.808 --rc geninfo_unexecuted_blocks=1 00:07:45.808 00:07:45.808 ' 00:07:45.808 11:10:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.808 11:10:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.808 ************************************ 00:07:45.808 START TEST thread_poller_perf 00:07:45.808 ************************************ 00:07:45.808 11:10:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.808 [2024-12-10 11:10:52.594574] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:45.808 [2024-12-10 11:10:52.594750] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60531 ] 00:07:46.068 [2024-12-10 11:10:52.782494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.327 [2024-12-10 11:10:52.928341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.327 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:47.726 [2024-12-10T11:10:54.552Z] ====================================== 00:07:47.726 [2024-12-10T11:10:54.552Z] busy:2213524815 (cyc) 00:07:47.726 [2024-12-10T11:10:54.552Z] total_run_count: 273000 00:07:47.726 [2024-12-10T11:10:54.552Z] tsc_hz: 2200000000 (cyc) 00:07:47.726 [2024-12-10T11:10:54.552Z] ====================================== 00:07:47.726 [2024-12-10T11:10:54.552Z] poller_cost: 8108 (cyc), 3685 (nsec) 00:07:47.726 00:07:47.726 real 0m1.605s 00:07:47.726 user 0m1.398s 00:07:47.726 sys 0m0.098s 00:07:47.726 11:10:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.726 ************************************ 00:07:47.726 END TEST thread_poller_perf 00:07:47.726 ************************************ 00:07:47.726 11:10:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:47.726 11:10:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:47.726 11:10:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:47.726 11:10:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.726 11:10:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.726 ************************************ 00:07:47.726 START TEST thread_poller_perf 00:07:47.726 ************************************ 00:07:47.726 11:10:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:47.726 [2024-12-10 11:10:54.239411] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:47.726 [2024-12-10 11:10:54.239719] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60567 ] 00:07:47.726 [2024-12-10 11:10:54.418234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.984 [2024-12-10 11:10:54.553755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.984 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:49.358 [2024-12-10T11:10:56.184Z] ====================================== 00:07:49.358 [2024-12-10T11:10:56.184Z] busy:2204800775 (cyc) 00:07:49.358 [2024-12-10T11:10:56.184Z] total_run_count: 3049000 00:07:49.358 [2024-12-10T11:10:56.184Z] tsc_hz: 2200000000 (cyc) 00:07:49.358 [2024-12-10T11:10:56.184Z] ====================================== 00:07:49.358 [2024-12-10T11:10:56.184Z] poller_cost: 723 (cyc), 328 (nsec) 00:07:49.358 00:07:49.358 real 0m1.586s 00:07:49.358 user 0m1.396s 00:07:49.358 sys 0m0.079s 00:07:49.358 11:10:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.358 11:10:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:49.358 ************************************ 00:07:49.358 END TEST thread_poller_perf 00:07:49.358 ************************************ 00:07:49.358 11:10:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:49.358 00:07:49.358 real 0m3.483s 00:07:49.358 user 0m2.945s 00:07:49.358 sys 0m0.315s 00:07:49.358 11:10:55 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.358 11:10:55 thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.358 ************************************ 00:07:49.358 END TEST thread 00:07:49.358 ************************************ 00:07:49.358 11:10:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:49.358 11:10:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:49.358 11:10:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.358 11:10:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.358 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:07:49.358 ************************************ 00:07:49.358 START TEST app_cmdline 00:07:49.358 ************************************ 00:07:49.358 11:10:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:49.358 * Looking for test storage... 
00:07:49.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:49.358 11:10:55 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:49.358 11:10:55 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:49.358 11:10:55 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:49.358 11:10:56 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:49.358 11:10:56 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.358 11:10:56 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.359 11:10:56 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:49.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.359 --rc genhtml_branch_coverage=1 00:07:49.359 --rc genhtml_function_coverage=1 00:07:49.359 --rc genhtml_legend=1 00:07:49.359 --rc geninfo_all_blocks=1 00:07:49.359 --rc geninfo_unexecuted_blocks=1 00:07:49.359 00:07:49.359 ' 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:49.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.359 --rc genhtml_branch_coverage=1 00:07:49.359 --rc genhtml_function_coverage=1 00:07:49.359 --rc genhtml_legend=1 00:07:49.359 --rc geninfo_all_blocks=1 00:07:49.359 --rc geninfo_unexecuted_blocks=1 00:07:49.359 
00:07:49.359 ' 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:49.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.359 --rc genhtml_branch_coverage=1 00:07:49.359 --rc genhtml_function_coverage=1 00:07:49.359 --rc genhtml_legend=1 00:07:49.359 --rc geninfo_all_blocks=1 00:07:49.359 --rc geninfo_unexecuted_blocks=1 00:07:49.359 00:07:49.359 ' 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:49.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.359 --rc genhtml_branch_coverage=1 00:07:49.359 --rc genhtml_function_coverage=1 00:07:49.359 --rc genhtml_legend=1 00:07:49.359 --rc geninfo_all_blocks=1 00:07:49.359 --rc geninfo_unexecuted_blocks=1 00:07:49.359 00:07:49.359 ' 00:07:49.359 11:10:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:49.359 11:10:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60651 00:07:49.359 11:10:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60651 00:07:49.359 11:10:56 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60651 ']' 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.359 11:10:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:49.617 [2024-12-10 11:10:56.216625] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:07:49.617 [2024-12-10 11:10:56.217268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60651 ] 00:07:49.617 [2024-12-10 11:10:56.399781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.874 [2024-12-10 11:10:56.511978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.133 [2024-12-10 11:10:56.735716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.700 11:10:57 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.700 11:10:57 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:50.700 11:10:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:50.959 { 00:07:50.959 "version": "SPDK v25.01-pre git sha1 52a413487", 00:07:50.959 "fields": { 00:07:50.959 "major": 25, 00:07:50.959 "minor": 1, 00:07:50.959 "patch": 0, 00:07:50.959 "suffix": "-pre", 00:07:50.959 "commit": "52a413487" 00:07:50.959 } 00:07:50.959 } 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:50.959 11:10:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:50.959 11:10:57 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:51.218 request: 00:07:51.218 { 00:07:51.218 "method": "env_dpdk_get_mem_stats", 00:07:51.218 "req_id": 1 00:07:51.218 } 00:07:51.218 Got JSON-RPC error response 00:07:51.218 response: 00:07:51.218 { 00:07:51.218 "code": -32601, 00:07:51.218 "message": "Method not found" 00:07:51.218 } 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.218 11:10:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60651 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60651 ']' 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60651 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60651 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.218 killing process with pid 60651 00:07:51.218 11:10:57 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60651' 00:07:51.219 11:10:57 app_cmdline -- common/autotest_common.sh@973 -- # kill 60651 00:07:51.219 11:10:57 app_cmdline -- common/autotest_common.sh@978 -- # wait 60651 00:07:53.752 00:07:53.752 real 0m4.307s 00:07:53.752 user 0m4.928s 00:07:53.752 sys 0m0.544s 00:07:53.752 11:11:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.752 11:11:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 ************************************ 00:07:53.752 END TEST app_cmdline 00:07:53.752 ************************************ 00:07:53.752 11:11:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.752 11:11:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.752 11:11:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.752 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 ************************************ 00:07:53.752 START TEST version 00:07:53.752 ************************************ 00:07:53.752 11:11:00 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.752 * Looking for test storage... 
00:07:53.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:53.752 11:11:00 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:53.752 11:11:00 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:53.752 11:11:00 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:53.752 11:11:00 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:53.752 11:11:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.752 11:11:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.752 11:11:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.752 11:11:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.752 11:11:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.752 11:11:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.752 11:11:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.752 11:11:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.752 11:11:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.752 11:11:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.752 11:11:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.752 11:11:00 version -- scripts/common.sh@344 -- # case "$op" in 00:07:53.752 11:11:00 version -- scripts/common.sh@345 -- # : 1 00:07:53.752 11:11:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.752 11:11:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.752 11:11:00 version -- scripts/common.sh@365 -- # decimal 1 00:07:53.752 11:11:00 version -- scripts/common.sh@353 -- # local d=1 00:07:53.752 11:11:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.752 11:11:00 version -- scripts/common.sh@355 -- # echo 1 00:07:53.752 11:11:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.752 11:11:00 version -- scripts/common.sh@366 -- # decimal 2 00:07:53.752 11:11:00 version -- scripts/common.sh@353 -- # local d=2 00:07:53.752 11:11:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.752 11:11:00 version -- scripts/common.sh@355 -- # echo 2 00:07:53.752 11:11:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.752 11:11:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.752 11:11:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.752 11:11:00 version -- scripts/common.sh@368 -- # return 0 00:07:53.752 11:11:00 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.752 11:11:00 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:53.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.752 --rc genhtml_branch_coverage=1 00:07:53.752 --rc genhtml_function_coverage=1 00:07:53.752 --rc genhtml_legend=1 00:07:53.752 --rc geninfo_all_blocks=1 00:07:53.752 --rc geninfo_unexecuted_blocks=1 00:07:53.752 00:07:53.752 ' 00:07:53.752 11:11:00 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:53.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.753 --rc genhtml_branch_coverage=1 00:07:53.753 --rc genhtml_function_coverage=1 00:07:53.753 --rc genhtml_legend=1 00:07:53.753 --rc geninfo_all_blocks=1 00:07:53.753 --rc geninfo_unexecuted_blocks=1 00:07:53.753 00:07:53.753 ' 00:07:53.753 11:11:00 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:53.753 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:53.753 --rc genhtml_branch_coverage=1 00:07:53.753 --rc genhtml_function_coverage=1 00:07:53.753 --rc genhtml_legend=1 00:07:53.753 --rc geninfo_all_blocks=1 00:07:53.753 --rc geninfo_unexecuted_blocks=1 00:07:53.753 00:07:53.753 ' 00:07:53.753 11:11:00 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:53.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.753 --rc genhtml_branch_coverage=1 00:07:53.753 --rc genhtml_function_coverage=1 00:07:53.753 --rc genhtml_legend=1 00:07:53.753 --rc geninfo_all_blocks=1 00:07:53.753 --rc geninfo_unexecuted_blocks=1 00:07:53.753 00:07:53.753 ' 00:07:53.753 11:11:00 version -- app/version.sh@17 -- # get_header_version major 00:07:53.753 11:11:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.753 11:11:00 version -- app/version.sh@14 -- # cut -f2 00:07:53.753 11:11:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.753 11:11:00 version -- app/version.sh@17 -- # major=25 00:07:53.753 11:11:00 version -- app/version.sh@18 -- # get_header_version minor 00:07:53.753 11:11:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.753 11:11:00 version -- app/version.sh@14 -- # cut -f2 00:07:53.753 11:11:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.753 11:11:00 version -- app/version.sh@18 -- # minor=1 00:07:53.753 11:11:00 version -- app/version.sh@19 -- # get_header_version patch 00:07:53.753 11:11:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.753 11:11:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.753 11:11:00 version -- app/version.sh@14 -- # cut -f2 00:07:53.753 11:11:00 version -- app/version.sh@19 -- # patch=0 00:07:53.753 11:11:00 version -- app/version.sh@20 -- # get_header_version suffix 00:07:53.753 11:11:00 version -- app/version.sh@14 -- # cut -f2 00:07:53.753 11:11:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.753 11:11:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.753 11:11:00 version -- app/version.sh@20 -- # suffix=-pre 00:07:53.753 11:11:00 version -- app/version.sh@22 -- # version=25.1 00:07:53.753 11:11:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:53.753 11:11:00 version -- app/version.sh@28 -- # version=25.1rc0 00:07:53.753 11:11:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:53.753 11:11:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:53.753 11:11:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:53.753 11:11:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:53.753 00:07:53.753 real 0m0.261s 00:07:53.753 user 0m0.176s 00:07:53.753 sys 0m0.121s 00:07:53.753 11:11:00 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.753 ************************************ 00:07:53.753 END TEST version 00:07:53.753 ************************************ 00:07:53.753 11:11:00 version -- common/autotest_common.sh@10 -- # set +x 00:07:53.753 11:11:00 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:53.753 11:11:00 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:53.753 11:11:00 -- spdk/autotest.sh@194 -- # uname -s 00:07:53.753 11:11:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:53.753 11:11:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:53.753 11:11:00 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:53.753 11:11:00 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:53.753 11:11:00 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:53.753 11:11:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.753 11:11:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.753 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:07:53.753 ************************************ 00:07:53.753 START TEST spdk_dd 00:07:53.753 ************************************ 00:07:53.753 11:11:00 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:54.012 * Looking for test storage... 00:07:54.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.012 11:11:00 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:54.012 11:11:00 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:07:54.012 11:11:00 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:54.012 11:11:00 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:54.012 11:11:00 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.012 11:11:00 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:54.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.012 --rc genhtml_branch_coverage=1 00:07:54.012 --rc genhtml_function_coverage=1 00:07:54.012 --rc genhtml_legend=1 00:07:54.012 --rc geninfo_all_blocks=1 00:07:54.012 --rc geninfo_unexecuted_blocks=1 00:07:54.012 00:07:54.012 ' 00:07:54.012 11:11:00 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:54.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.012 --rc genhtml_branch_coverage=1 00:07:54.012 --rc genhtml_function_coverage=1 00:07:54.012 --rc genhtml_legend=1 00:07:54.012 --rc geninfo_all_blocks=1 00:07:54.012 --rc geninfo_unexecuted_blocks=1 00:07:54.012 00:07:54.012 ' 00:07:54.012 11:11:00 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:54.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.012 --rc genhtml_branch_coverage=1 00:07:54.012 --rc genhtml_function_coverage=1 00:07:54.012 --rc genhtml_legend=1 00:07:54.012 --rc geninfo_all_blocks=1 00:07:54.012 --rc geninfo_unexecuted_blocks=1 00:07:54.012 00:07:54.012 ' 00:07:54.012 11:11:00 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:54.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.012 --rc genhtml_branch_coverage=1 00:07:54.012 --rc genhtml_function_coverage=1 00:07:54.012 --rc genhtml_legend=1 00:07:54.012 --rc geninfo_all_blocks=1 00:07:54.012 --rc geninfo_unexecuted_blocks=1 00:07:54.012 00:07:54.012 ' 00:07:54.012 11:11:00 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.012 11:11:00 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.012 11:11:00 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.012 11:11:00 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.012 11:11:00 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.013 11:11:00 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:54.013 11:11:00 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.013 11:11:00 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:54.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:54.271 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:54.271 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:54.531 11:11:01 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:54.531 11:11:01 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:54.531 11:11:01 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:54.531 11:11:01 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:54.531 11:11:01 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:54.531 
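The nvme_in_userspace trace just above builds the NVMe list purely from PCI class codes: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), i.e. the "0108"/-p02 filter applied to lspci -mm -n -D. A rough standalone equivalent, assuming lspci's -mm -n -D field order and skipping the pci_can_use allow/block-list checks the real script performs:

#!/usr/bin/env bash
# Rough standalone equivalent of the nvme_in_userspace enumeration traced above:
# print the PCI addresses of NVMe functions (class 01, subclass 08, prog-if 02).
# Assumes lspci's -mm -n -D output format; allow/block-list handling is omitted.
set -eu

lspci -mm -n -D \
  | grep -i -- -p02 \
  | tr -d '"' \
  | awk '$2 == "0108" { print $1 }'

On the QEMU guest in this log the expected output is 0000:00:10.0 and 0000:00:11.0, the two controllers dd.sh then hands to basic_rw.sh.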
11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 
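check_liburing decides whether spdk_dd was built against liburing by walking the NEEDED entries of the ELF binary (objdump -p | grep NEEDED) and matching each library name against liburing.so.*, which is what the long run of [[ ... == liburing.so.* ]] checks around this point is doing one entry at a time. A compressed sketch of the same loop, assuming binutils objdump and the build/bin/spdk_dd path from this log:

#!/usr/bin/env bash
# Compressed sketch of the check_liburing loop traced here: scan the dynamic
# NEEDED entries of spdk_dd and record whether any of them is liburing.
# Assumes binutils objdump and the binary path shown in this log.
set -eu

BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0

while read -r _ lib _; do
    # Each matching line looks like "  NEEDED  liburing.so.2".
    if [[ $lib == liburing.so.* ]]; then
        liburing_in_use=1
        break
    fi
done < <(objdump -p "$BIN" | grep NEEDED)

echo "liburing_in_use=$liburing_in_use"

In this run the scan eventually reaches liburing.so.2, so the test prints "* spdk_dd linked to liburing" further down and liburing_in_use ends up set to 1.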
11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:54.531 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:54.532 * spdk_dd linked to liburing 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:54.532 11:11:01 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:54.532 11:11:01 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:54.533 11:11:01 spdk_dd 
-- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:54.533 11:11:01 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:54.533 11:11:01 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:54.533 11:11:01 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:54.533 11:11:01 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:54.533 11:11:01 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:54.533 11:11:01 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:54.533 11:11:01 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:54.533 11:11:01 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:54.533 11:11:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:54.533 11:11:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.533 11:11:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:54.533 ************************************ 00:07:54.533 START TEST spdk_dd_basic_rw 00:07:54.533 ************************************ 00:07:54.533 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:54.533 * Looking for test storage... 
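After the NEEDED scan reports liburing, dd/common.sh cross-checks the build configuration: it sources test/common/build_config.sh (the CONFIG_* dump above) before exporting liburing_in_use=1, and the dd.sh@15 arithmetic check (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) then evaluates false here. A small sketch of that guard, with the build_config.sh path taken from this log and the exact variable compared (CONFIG_URING) assumed from the [[ y != y ]] evaluation in the trace:

#!/usr/bin/env bash
# Small sketch of the consistency guard traced above: the binary links liburing,
# so confirm the tree was actually configured with uring support before
# exporting liburing_in_use. The variable compared is an assumption from this log.
set -eu

BUILD_CONFIG=/home/vagrant/spdk_repo/spdk/test/common/build_config.sh

if [[ -e $BUILD_CONFIG ]]; then
    # Pulls in CONFIG_URING, CONFIG_ASAN, CONFIG_UBSAN, ... as plain shell variables.
    source "$BUILD_CONFIG"
    if [[ ${CONFIG_URING:-n} != y ]]; then
        echo "spdk_dd links liburing but CONFIG_URING is not enabled in this build" >&2
        exit 1
    fi
fi

export liburing_in_use=1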
00:07:54.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.533 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:54.533 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:07:54.533 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:54.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.792 --rc genhtml_branch_coverage=1 00:07:54.792 --rc genhtml_function_coverage=1 00:07:54.792 --rc genhtml_legend=1 00:07:54.792 --rc geninfo_all_blocks=1 00:07:54.792 --rc geninfo_unexecuted_blocks=1 00:07:54.792 00:07:54.792 ' 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:54.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.792 --rc genhtml_branch_coverage=1 00:07:54.792 --rc genhtml_function_coverage=1 00:07:54.792 --rc genhtml_legend=1 00:07:54.792 --rc geninfo_all_blocks=1 00:07:54.792 --rc geninfo_unexecuted_blocks=1 00:07:54.792 00:07:54.792 ' 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:54.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.792 --rc genhtml_branch_coverage=1 00:07:54.792 --rc genhtml_function_coverage=1 00:07:54.792 --rc genhtml_legend=1 00:07:54.792 --rc geninfo_all_blocks=1 00:07:54.792 --rc geninfo_unexecuted_blocks=1 00:07:54.792 00:07:54.792 ' 00:07:54.792 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:54.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.792 --rc genhtml_branch_coverage=1 00:07:54.792 --rc genhtml_function_coverage=1 00:07:54.793 --rc genhtml_legend=1 00:07:54.793 --rc geninfo_all_blocks=1 00:07:54.793 --rc geninfo_unexecuted_blocks=1 00:07:54.793 00:07:54.793 ' 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.793 11:11:01 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:54.793 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:55.054 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:55.054 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.055 ************************************ 00:07:55.055 START TEST dd_bs_lt_native_bs 00:07:55.055 ************************************ 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.055 11:11:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:55.055 { 00:07:55.055 "subsystems": [ 00:07:55.055 { 00:07:55.055 "subsystem": "bdev", 00:07:55.055 "config": [ 00:07:55.055 { 00:07:55.055 "params": { 00:07:55.055 "trtype": "pcie", 00:07:55.055 "traddr": "0000:00:10.0", 00:07:55.055 "name": "Nvme0" 00:07:55.055 }, 00:07:55.055 "method": "bdev_nvme_attach_controller" 00:07:55.055 }, 00:07:55.055 { 00:07:55.055 "method": "bdev_wait_for_examine" 00:07:55.055 } 00:07:55.055 ] 00:07:55.055 } 00:07:55.055 ] 00:07:55.055 } 00:07:55.055 [2024-12-10 11:11:01.833792] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:55.055 [2024-12-10 11:11:01.833939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61032 ] 00:07:55.314 [2024-12-10 11:11:02.022070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.572 [2024-12-10 11:11:02.151135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.572 [2024-12-10 11:11:02.369389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.831 [2024-12-10 11:11:02.570242] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:55.831 [2024-12-10 11:11:02.570327] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.398 [2024-12-10 11:11:03.118442] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.657 00:07:56.657 real 0m1.649s 00:07:56.657 user 0m1.392s 00:07:56.657 sys 0m0.201s 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.657 11:11:03 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:56.657 ************************************ 00:07:56.657 END TEST dd_bs_lt_native_bs 00:07:56.657 ************************************ 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:56.657 ************************************ 00:07:56.657 START TEST dd_rw 00:07:56.657 ************************************ 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:56.657 11:11:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.594 11:11:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:57.594 11:11:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:57.594 11:11:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:57.594 11:11:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.594 { 00:07:57.594 "subsystems": [ 00:07:57.594 { 00:07:57.594 "subsystem": "bdev", 00:07:57.594 "config": [ 00:07:57.594 { 00:07:57.594 "params": { 00:07:57.594 "trtype": "pcie", 00:07:57.594 "traddr": "0000:00:10.0", 00:07:57.594 "name": "Nvme0" 00:07:57.594 }, 00:07:57.594 "method": "bdev_nvme_attach_controller" 00:07:57.594 }, 00:07:57.594 { 00:07:57.594 "method": "bdev_wait_for_examine" 00:07:57.594 } 00:07:57.594 ] 00:07:57.594 } 
00:07:57.594 ] 00:07:57.594 } 00:07:57.594 [2024-12-10 11:11:04.253139] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:07:57.594 [2024-12-10 11:11:04.253312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61075 ] 00:07:57.852 [2024-12-10 11:11:04.435027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.852 [2024-12-10 11:11:04.548032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.111 [2024-12-10 11:11:04.746969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.369  [2024-12-10T11:11:06.132Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:59.306 00:07:59.306 11:11:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:59.306 11:11:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:59.306 11:11:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:59.306 11:11:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.306 { 00:07:59.306 "subsystems": [ 00:07:59.306 { 00:07:59.306 "subsystem": "bdev", 00:07:59.306 "config": [ 00:07:59.306 { 00:07:59.306 "params": { 00:07:59.306 "trtype": "pcie", 00:07:59.306 "traddr": "0000:00:10.0", 00:07:59.306 "name": "Nvme0" 00:07:59.306 }, 00:07:59.306 "method": "bdev_nvme_attach_controller" 00:07:59.306 }, 00:07:59.306 { 00:07:59.306 "method": "bdev_wait_for_examine" 00:07:59.306 } 00:07:59.306 ] 00:07:59.306 } 00:07:59.306 ] 00:07:59.306 } 00:07:59.306 [2024-12-10 11:11:06.127342] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
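The get_native_nvme_bs trace above (dd/common.sh@124-134) probes the controller with spdk_nvme_identify, pulls the current LBA format and its data size out of the output with bash regexes, and arrives at a native block size of 4096 bytes; dd_bs_lt_native_bs then runs spdk_dd with --bs=2048 against Nvme0n1 and expects it to fail with "--bs value cannot be less than input (1) neither output (4096) native block size", the NOT wrapper treating the non-zero exit as the intended result. A minimal standalone sketch of the same probe, assuming the build paths and PCIe address used in this run (the two regexes are the ones visible in the trace):

    pci=0000:00:10.0
    identify=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    # "Current LBA Format: LBA Format #04" -> lbaf=04
    re_fmt='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $identify =~ $re_fmt ]] && lbaf=${BASH_REMATCH[1]}
    # "LBA Format #04: Data Size: 4096 ..." -> native_bs=4096
    re_bs="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $identify =~ $re_bs ]] && native_bs=${BASH_REMATCH[1]}
    echo "native block size: $native_bs"   # 4096 on this QEMU namespace (LBA format #04)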
00:07:59.306 [2024-12-10 11:11:06.127551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61106 ] 00:07:59.565 [2024-12-10 11:11:06.320474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.824 [2024-12-10 11:11:06.464932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.083 [2024-12-10 11:11:06.679715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.083  [2024-12-10T11:11:07.845Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:01.019 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:01.019 11:11:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.019 { 00:08:01.019 "subsystems": [ 00:08:01.019 { 00:08:01.019 "subsystem": "bdev", 00:08:01.019 "config": [ 00:08:01.019 { 00:08:01.019 "params": { 00:08:01.019 "trtype": "pcie", 00:08:01.019 "traddr": "0000:00:10.0", 00:08:01.019 "name": "Nvme0" 00:08:01.019 }, 00:08:01.019 "method": "bdev_nvme_attach_controller" 00:08:01.019 }, 00:08:01.020 { 00:08:01.020 "method": "bdev_wait_for_examine" 00:08:01.020 } 00:08:01.020 ] 00:08:01.020 } 00:08:01.020 ] 00:08:01.020 } 00:08:01.279 [2024-12-10 11:11:07.848180] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
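Every spdk_dd invocation in this suite is handed its bdev configuration on a file descriptor (gen_conf writes it, the test passes --json /dev/fd/62): the config does nothing but attach the PCIe controller at 0000:00:10.0 as Nvme0 and wait for bdev examination, which is what exposes the Nvme0n1 namespace the copies target. The same config, reformatted for readability and written to an ordinary file (the /tmp path is only an example), drives an equivalent standalone invocation:

    cat > /tmp/nvme0_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --ob=Nvme0n1 --bs=4096 --qd=1 --json /tmp/nvme0_bdev.json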
00:08:01.279 [2024-12-10 11:11:07.848373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61134 ] 00:08:01.279 [2024-12-10 11:11:08.033561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.537 [2024-12-10 11:11:08.146366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.537 [2024-12-10 11:11:08.338126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.795  [2024-12-10T11:11:09.999Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:03.173 00:08:03.173 11:11:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:03.173 11:11:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:03.173 11:11:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:03.173 11:11:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:03.173 11:11:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:03.173 11:11:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:03.173 11:11:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.740 11:11:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:03.740 11:11:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:03.740 11:11:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.740 11:11:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.740 { 00:08:03.740 "subsystems": [ 00:08:03.740 { 00:08:03.740 "subsystem": "bdev", 00:08:03.740 "config": [ 00:08:03.740 { 00:08:03.740 "params": { 00:08:03.740 "trtype": "pcie", 00:08:03.740 "traddr": "0000:00:10.0", 00:08:03.740 "name": "Nvme0" 00:08:03.740 }, 00:08:03.740 "method": "bdev_nvme_attach_controller" 00:08:03.740 }, 00:08:03.740 { 00:08:03.740 "method": "bdev_wait_for_examine" 00:08:03.740 } 00:08:03.740 ] 00:08:03.740 } 00:08:03.740 ] 00:08:03.740 } 00:08:03.740 [2024-12-10 11:11:10.405388] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
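With the qd=1 pass at 4 KiB finished (write dd.dump0 to Nvme0n1, read 15 blocks back into dd.dump1, diff the two files, then a 1 MiB zero-fill to clear the namespace), dd_rw repeats the identical cycle for every block size/queue depth combination: 4096, 8192 and 16384 bytes (native_bs shifted left by 0..2) at queue depths 1 and 64, as traced below. A condensed sketch of that loop, assuming a config file like the one sketched above (the harness actually feeds gen_conf output on /dev/fd/62) and the dump-file paths from this run:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CONF=/tmp/nvme0_bdev.json                                # illustrative path, see previous sketch
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0     # pre-filled with count*bs bytes of test data (gen_bytes)
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    for bs in 4096 8192 16384; do
      case $bs in 4096) count=15 ;; 8192) count=7 ;; 16384) count=3 ;; esac   # counts as traced in this run
      for qd in 1 64; do
        "$DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"                  # write
        "$DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count" --json "$CONF" # read back
        diff -q "$DUMP0" "$DUMP1"                                                              # verify
        "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$CONF"                # clear_nvme
      done
    done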
00:08:03.740 [2024-12-10 11:11:10.405937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61170 ] 00:08:03.998 [2024-12-10 11:11:10.589584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.998 [2024-12-10 11:11:10.715687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.256 [2024-12-10 11:11:10.926459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.513  [2024-12-10T11:11:12.274Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:05.448 00:08:05.448 11:11:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:05.448 11:11:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:05.448 11:11:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.448 11:11:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 { 00:08:05.448 "subsystems": [ 00:08:05.448 { 00:08:05.448 "subsystem": "bdev", 00:08:05.448 "config": [ 00:08:05.448 { 00:08:05.448 "params": { 00:08:05.448 "trtype": "pcie", 00:08:05.448 "traddr": "0000:00:10.0", 00:08:05.448 "name": "Nvme0" 00:08:05.448 }, 00:08:05.448 "method": "bdev_nvme_attach_controller" 00:08:05.448 }, 00:08:05.448 { 00:08:05.448 "method": "bdev_wait_for_examine" 00:08:05.448 } 00:08:05.448 ] 00:08:05.448 } 00:08:05.448 ] 00:08:05.448 } 00:08:05.448 [2024-12-10 11:11:12.042773] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:05.448 [2024-12-10 11:11:12.042962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61196 ] 00:08:05.448 [2024-12-10 11:11:12.230131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.706 [2024-12-10 11:11:12.380516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.965 [2024-12-10 11:11:12.597666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.965  [2024-12-10T11:11:14.166Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:07.340 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.340 11:11:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.340 { 00:08:07.340 "subsystems": [ 00:08:07.340 { 00:08:07.340 "subsystem": "bdev", 00:08:07.340 "config": [ 00:08:07.340 { 00:08:07.340 "params": { 00:08:07.340 "trtype": "pcie", 00:08:07.340 "traddr": "0000:00:10.0", 00:08:07.340 "name": "Nvme0" 00:08:07.340 }, 00:08:07.340 "method": "bdev_nvme_attach_controller" 00:08:07.340 }, 00:08:07.340 { 00:08:07.340 "method": "bdev_wait_for_examine" 00:08:07.340 } 00:08:07.340 ] 00:08:07.340 } 00:08:07.340 ] 00:08:07.340 } 00:08:07.340 [2024-12-10 11:11:13.916225] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:07.340 [2024-12-10 11:11:13.916398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61229 ] 00:08:07.340 [2024-12-10 11:11:14.099795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.598 [2024-12-10 11:11:14.212632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.598 [2024-12-10 11:11:14.404751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.856  [2024-12-10T11:11:15.617Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:08.791 00:08:08.791 11:11:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:08.791 11:11:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:08.791 11:11:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:08.791 11:11:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:08.791 11:11:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:08.791 11:11:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:08.791 11:11:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:08.791 11:11:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.358 11:11:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:09.358 11:11:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:09.358 11:11:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.358 11:11:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.358 { 00:08:09.358 "subsystems": [ 00:08:09.358 { 00:08:09.358 "subsystem": "bdev", 00:08:09.358 "config": [ 00:08:09.358 { 00:08:09.358 "params": { 00:08:09.358 "trtype": "pcie", 00:08:09.358 "traddr": "0000:00:10.0", 00:08:09.358 "name": "Nvme0" 00:08:09.358 }, 00:08:09.358 "method": "bdev_nvme_attach_controller" 00:08:09.358 }, 00:08:09.358 { 00:08:09.358 "method": "bdev_wait_for_examine" 00:08:09.358 } 00:08:09.358 ] 00:08:09.358 } 00:08:09.358 ] 00:08:09.358 } 00:08:09.616 [2024-12-10 11:11:16.211752] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:09.616 [2024-12-10 11:11:16.211936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61260 ] 00:08:09.616 [2024-12-10 11:11:16.401788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.875 [2024-12-10 11:11:16.536379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.134 [2024-12-10 11:11:16.765502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.392  [2024-12-10T11:11:18.154Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:11.328 00:08:11.329 11:11:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:11.329 11:11:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:11.329 11:11:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.329 11:11:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.329 { 00:08:11.329 "subsystems": [ 00:08:11.329 { 00:08:11.329 "subsystem": "bdev", 00:08:11.329 "config": [ 00:08:11.329 { 00:08:11.329 "params": { 00:08:11.329 "trtype": "pcie", 00:08:11.329 "traddr": "0000:00:10.0", 00:08:11.329 "name": "Nvme0" 00:08:11.329 }, 00:08:11.329 "method": "bdev_nvme_attach_controller" 00:08:11.329 }, 00:08:11.329 { 00:08:11.329 "method": "bdev_wait_for_examine" 00:08:11.329 } 00:08:11.329 ] 00:08:11.329 } 00:08:11.329 ] 00:08:11.329 } 00:08:11.329 [2024-12-10 11:11:18.110065] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:11.329 [2024-12-10 11:11:18.110465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61291 ] 00:08:11.587 [2024-12-10 11:11:18.304668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.846 [2024-12-10 11:11:18.432921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.846 [2024-12-10 11:11:18.655704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.105  [2024-12-10T11:11:19.868Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:13.042 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:13.042 11:11:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.042 { 00:08:13.042 "subsystems": [ 00:08:13.042 { 00:08:13.042 "subsystem": "bdev", 00:08:13.042 "config": [ 00:08:13.042 { 00:08:13.042 "params": { 00:08:13.042 "trtype": "pcie", 00:08:13.042 "traddr": "0000:00:10.0", 00:08:13.042 "name": "Nvme0" 00:08:13.042 }, 00:08:13.042 "method": "bdev_nvme_attach_controller" 00:08:13.042 }, 00:08:13.042 { 00:08:13.042 "method": "bdev_wait_for_examine" 00:08:13.042 } 00:08:13.042 ] 00:08:13.042 } 00:08:13.042 ] 00:08:13.042 } 00:08:13.042 [2024-12-10 11:11:19.760142] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:13.042 [2024-12-10 11:11:19.760298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61318 ] 00:08:13.304 [2024-12-10 11:11:19.938754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.304 [2024-12-10 11:11:20.063938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.563 [2024-12-10 11:11:20.283200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.822  [2024-12-10T11:11:21.583Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:14.757 00:08:14.757 11:11:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:14.757 11:11:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:14.757 11:11:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:14.757 11:11:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:14.757 11:11:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:14.757 11:11:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:14.757 11:11:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:15.694 11:11:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:15.694 11:11:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:15.694 11:11:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:15.694 11:11:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:15.694 { 00:08:15.694 "subsystems": [ 00:08:15.694 { 00:08:15.694 "subsystem": "bdev", 00:08:15.694 "config": [ 00:08:15.694 { 00:08:15.694 "params": { 00:08:15.694 "trtype": "pcie", 00:08:15.694 "traddr": "0000:00:10.0", 00:08:15.694 "name": "Nvme0" 00:08:15.694 }, 00:08:15.694 "method": "bdev_nvme_attach_controller" 00:08:15.694 }, 00:08:15.694 { 00:08:15.694 "method": "bdev_wait_for_examine" 00:08:15.694 } 00:08:15.694 ] 00:08:15.694 } 00:08:15.694 ] 00:08:15.694 } 00:08:15.694 [2024-12-10 11:11:22.270319] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:15.694 [2024-12-10 11:11:22.270729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61355 ] 00:08:15.694 [2024-12-10 11:11:22.461382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.953 [2024-12-10 11:11:22.603253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.212 [2024-12-10 11:11:22.823883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.212  [2024-12-10T11:11:23.975Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:17.149 00:08:17.149 11:11:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:17.149 11:11:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:17.149 11:11:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:17.149 11:11:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:17.149 { 00:08:17.149 "subsystems": [ 00:08:17.149 { 00:08:17.149 "subsystem": "bdev", 00:08:17.149 "config": [ 00:08:17.149 { 00:08:17.149 "params": { 00:08:17.149 "trtype": "pcie", 00:08:17.149 "traddr": "0000:00:10.0", 00:08:17.149 "name": "Nvme0" 00:08:17.149 }, 00:08:17.149 "method": "bdev_nvme_attach_controller" 00:08:17.149 }, 00:08:17.150 { 00:08:17.150 "method": "bdev_wait_for_examine" 00:08:17.150 } 00:08:17.150 ] 00:08:17.150 } 00:08:17.150 ] 00:08:17.150 } 00:08:17.150 [2024-12-10 11:11:23.966156] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:17.150 [2024-12-10 11:11:23.966334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61380 ] 00:08:17.409 [2024-12-10 11:11:24.155459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.668 [2024-12-10 11:11:24.284469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.668 [2024-12-10 11:11:24.492247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.927  [2024-12-10T11:11:26.128Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:19.302 00:08:19.302 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.302 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:19.302 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:19.302 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:19.302 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:19.303 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:19.303 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:19.303 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:19.303 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:19.303 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:19.303 11:11:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:19.303 { 00:08:19.303 "subsystems": [ 00:08:19.303 { 00:08:19.303 "subsystem": "bdev", 00:08:19.303 "config": [ 00:08:19.303 { 00:08:19.303 "params": { 00:08:19.303 "trtype": "pcie", 00:08:19.303 "traddr": "0000:00:10.0", 00:08:19.303 "name": "Nvme0" 00:08:19.303 }, 00:08:19.303 "method": "bdev_nvme_attach_controller" 00:08:19.303 }, 00:08:19.303 { 00:08:19.303 "method": "bdev_wait_for_examine" 00:08:19.303 } 00:08:19.303 ] 00:08:19.303 } 00:08:19.303 ] 00:08:19.303 } 00:08:19.303 [2024-12-10 11:11:25.825715] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:19.303 [2024-12-10 11:11:25.825901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61408 ] 00:08:19.303 [2024-12-10 11:11:26.010821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.561 [2024-12-10 11:11:26.135879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.561 [2024-12-10 11:11:26.323066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.820  [2024-12-10T11:11:27.580Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:20.754 00:08:20.754 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:20.754 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:20.754 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:20.754 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:20.754 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:20.754 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:20.754 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:20.754 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:21.320 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:21.320 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:21.320 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:21.320 11:11:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:21.320 { 00:08:21.320 "subsystems": [ 00:08:21.320 { 00:08:21.320 "subsystem": "bdev", 00:08:21.320 "config": [ 00:08:21.320 { 00:08:21.320 "params": { 00:08:21.320 "trtype": "pcie", 00:08:21.320 "traddr": "0000:00:10.0", 00:08:21.320 "name": "Nvme0" 00:08:21.320 }, 00:08:21.320 "method": "bdev_nvme_attach_controller" 00:08:21.320 }, 00:08:21.320 { 00:08:21.320 "method": "bdev_wait_for_examine" 00:08:21.320 } 00:08:21.320 ] 00:08:21.320 } 00:08:21.320 ] 00:08:21.320 } 00:08:21.320 [2024-12-10 11:11:28.008145] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:21.320 [2024-12-10 11:11:28.008407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61439 ] 00:08:21.579 [2024-12-10 11:11:28.189173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.579 [2024-12-10 11:11:28.298551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.837 [2024-12-10 11:11:28.497020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.095  [2024-12-10T11:11:29.856Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:23.030 00:08:23.030 11:11:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:23.030 11:11:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:23.030 11:11:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:23.030 11:11:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.030 { 00:08:23.030 "subsystems": [ 00:08:23.030 { 00:08:23.030 "subsystem": "bdev", 00:08:23.030 "config": [ 00:08:23.030 { 00:08:23.030 "params": { 00:08:23.030 "trtype": "pcie", 00:08:23.030 "traddr": "0000:00:10.0", 00:08:23.030 "name": "Nvme0" 00:08:23.030 }, 00:08:23.030 "method": "bdev_nvme_attach_controller" 00:08:23.030 }, 00:08:23.030 { 00:08:23.030 "method": "bdev_wait_for_examine" 00:08:23.030 } 00:08:23.030 ] 00:08:23.030 } 00:08:23.030 ] 00:08:23.030 } 00:08:23.030 [2024-12-10 11:11:29.829133] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:23.030 [2024-12-10 11:11:29.829536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61470 ] 00:08:23.289 [2024-12-10 11:11:30.002171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.289 [2024-12-10 11:11:30.105094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.548 [2024-12-10 11:11:30.284747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.806  [2024-12-10T11:11:31.567Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:24.741 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:24.741 11:11:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.741 { 00:08:24.741 "subsystems": [ 00:08:24.741 { 00:08:24.741 "subsystem": "bdev", 00:08:24.741 "config": [ 00:08:24.741 { 00:08:24.741 "params": { 00:08:24.741 "trtype": "pcie", 00:08:24.741 "traddr": "0000:00:10.0", 00:08:24.741 "name": "Nvme0" 00:08:24.741 }, 00:08:24.741 "method": "bdev_nvme_attach_controller" 00:08:24.741 }, 00:08:24.741 { 00:08:24.741 "method": "bdev_wait_for_examine" 00:08:24.741 } 00:08:24.741 ] 00:08:24.741 } 00:08:24.741 ] 00:08:24.741 } 00:08:24.741 [2024-12-10 11:11:31.400593] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:24.741 [2024-12-10 11:11:31.400965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61503 ] 00:08:25.000 [2024-12-10 11:11:31.579182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.001 [2024-12-10 11:11:31.708954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.259 [2024-12-10 11:11:31.921220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.518  [2024-12-10T11:11:33.279Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:26.453 00:08:26.453 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:26.453 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:26.453 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:26.453 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:26.453 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:26.453 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:26.453 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:27.020 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:27.020 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:27.020 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:27.020 11:11:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:27.020 { 00:08:27.020 "subsystems": [ 00:08:27.020 { 00:08:27.020 "subsystem": "bdev", 00:08:27.020 "config": [ 00:08:27.020 { 00:08:27.020 "params": { 00:08:27.020 "trtype": "pcie", 00:08:27.020 "traddr": "0000:00:10.0", 00:08:27.020 "name": "Nvme0" 00:08:27.020 }, 00:08:27.020 "method": "bdev_nvme_attach_controller" 00:08:27.020 }, 00:08:27.020 { 00:08:27.020 "method": "bdev_wait_for_examine" 00:08:27.020 } 00:08:27.020 ] 00:08:27.020 } 00:08:27.020 ] 00:08:27.020 } 00:08:27.020 [2024-12-10 11:11:33.790442] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:27.020 [2024-12-10 11:11:33.790621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61534 ] 00:08:27.278 [2024-12-10 11:11:33.972669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.278 [2024-12-10 11:11:34.076981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.536 [2024-12-10 11:11:34.259318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.794  [2024-12-10T11:11:35.555Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:28.729 00:08:28.729 11:11:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:28.729 11:11:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:28.729 11:11:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:28.729 11:11:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:28.729 { 00:08:28.729 "subsystems": [ 00:08:28.729 { 00:08:28.729 "subsystem": "bdev", 00:08:28.729 "config": [ 00:08:28.729 { 00:08:28.729 "params": { 00:08:28.729 "trtype": "pcie", 00:08:28.729 "traddr": "0000:00:10.0", 00:08:28.729 "name": "Nvme0" 00:08:28.729 }, 00:08:28.729 "method": "bdev_nvme_attach_controller" 00:08:28.729 }, 00:08:28.729 { 00:08:28.729 "method": "bdev_wait_for_examine" 00:08:28.729 } 00:08:28.729 ] 00:08:28.729 } 00:08:28.729 ] 00:08:28.729 } 00:08:28.729 [2024-12-10 11:11:35.404580] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:28.729 [2024-12-10 11:11:35.404738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61560 ] 00:08:28.988 [2024-12-10 11:11:35.579116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.988 [2024-12-10 11:11:35.682479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.246 [2024-12-10 11:11:35.874357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.246  [2024-12-10T11:11:37.457Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:30.632 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:30.632 11:11:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:30.632 { 00:08:30.632 "subsystems": [ 00:08:30.632 { 00:08:30.632 "subsystem": "bdev", 00:08:30.632 "config": [ 00:08:30.632 { 00:08:30.632 "params": { 00:08:30.632 "trtype": "pcie", 00:08:30.632 "traddr": "0000:00:10.0", 00:08:30.632 "name": "Nvme0" 00:08:30.632 }, 00:08:30.632 "method": "bdev_nvme_attach_controller" 00:08:30.632 }, 00:08:30.632 { 00:08:30.632 "method": "bdev_wait_for_examine" 00:08:30.632 } 00:08:30.632 ] 00:08:30.632 } 00:08:30.632 ] 00:08:30.632 } 00:08:30.632 [2024-12-10 11:11:37.189675] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
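Each queue-depth iteration of dd_rw above is a write/read-back/compare round trip. Condensed from the commands traced in this run (the fd-62 bdev config is omitted here for brevity, and the real script resolves spdk_dd under /home/vagrant/spdk_repo):

  # 1) write 3 x 16 KiB blocks of generated data to the Nvme0n1 bdev at qd=64
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62
  # 2) read the same 3 blocks back into a second dump file
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62
  # 3) the iteration passes only if the round trip is bit-exact
  diff -q dd.dump0 dd.dump1
  # 4) clear_nvme then zero-fills the region before the next iteration
  spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62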
00:08:30.632 [2024-12-10 11:11:37.189826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61587 ] 00:08:30.632 [2024-12-10 11:11:37.366232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.903 [2024-12-10 11:11:37.474115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.903 [2024-12-10 11:11:37.655074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.161  [2024-12-10T11:11:38.922Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:32.096 00:08:32.096 ************************************ 00:08:32.096 END TEST dd_rw 00:08:32.096 ************************************ 00:08:32.096 00:08:32.096 real 0m35.187s 00:08:32.096 user 0m29.892s 00:08:32.097 sys 0m16.080s 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:32.097 ************************************ 00:08:32.097 START TEST dd_rw_offset 00:08:32.097 ************************************ 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=u2tznkq6tclkjisvtbau64x2yf78l2go0eljoxxq0fday9nhd7u14pqxgvmjhxcctv1icbj0be6rzevppbmx8cq2wlhea2gf8uo9viq2x1jwrjfh30gd6bv1jmkifqlzcfbgwhi30x3g0l7150g38uc8zm4hd9d444u4jwnrog8e2w4tdsrobd9ml67nrcfvutgevy28eizpio1h75bh78rhsqcio636iyips6vve1das42g33kg4dog6lmwf8czgrwsc29jyvhmk2qu4dp4u2nr2j13q85sq5atzoi7apznvkp7cq8p1ykynyp37fm5eiunxwb973bhb48vnvphhsb39kpqflvec19mm4boz4u3cb7vvjykfroyrmcunmygd8bu7h1j2zbzn3jtimbnd2fooeqh05x0f5v8q66i2ua58e6cmtg479o9qyhj4m602woa280jt5cf1s70s5i0jtcbpcvrirl0y48j4ae32p7q57nbs1npp2zj7w731fgyo1fnou05pydijiik85yn3zuzzrkvuwp1054le0y6t6gp1mtw29ccta0ohli7hb6poclcib7kk515efc1tgfynas3d3pzm5olxzsihzdgfmuthexh08huxlyjefi4f6cumew2m0sqznabggilakllceqon8ay4kkx7eibitt2mfiuk8438nstubswg7ecaj45xn9f7us0bwo7mautldt3dlvxnccttgy5rosk4ylfroqn2aein0y0hfi1q4j18qu8k8z5lnnndckq6fxgwuicb8owrgjqgro8q4uqujcg7vtvmzz9mh9gi3i18siqmypre2qti4z0xs3w5j64ff41fs3e65whq519geptmzystx55uxkzre6kp0v97ld1bdnvx1o2quekyhu79d4y700rwmnt6yjzk42qar294d64o4ve46rgeb44frn4z3fb8i3okpvlx7lep1u5jnk8tlrkwov0svzx59q6ileab8ad5stjs0wcbnm56coq706rpre8d0r5lu7oq5kotlnbqflit94ns628e7eg9sfgxzyc0ho9hqnxz3urmo7zugwgkppcr1nw5n7y5jfiq959hdw63l19l6gkj3qbyy59ptyran3ys43uz4wls7f0n7wj3olmjsfz7delqik0hk5yzts8s8mvrwd646tva6q29dviz2fdcflytw4yr9yh7t9v5zx03clb99ajiqrlcvaeigbjfm80c9eq4fnq1y1c9vwco5657mcmkf9vvn3my2g2zab8mcjri6d49f7oxjcreakm8sijgf94d2jr29gvrdm5927ghlqxz9dpuhbnt5eyxvfxa4s88uqxzpvzm7qzw8u28rj7tqek251vkr1s10gp4i1ym7qzpmqwtu1s1b6z8c8rifl91ngo5o4yqllpbll4jvkd501blfnsrqui890l7qi1ucz3d0dfixlzpe1rau5syvuzuw4c7bqr47t3tpoags59hiwhsehfvqr6cvvq0v6xyrhaa51j0yich3ajrh2fkij5bht2kqrgih042c10t3ztsm9auwblwy48k0whb26eocoxhyxi2jeupmbraczkcptkgxutl9xqgng8628u48iscnukm0vn3pv706ukdcml9gbujaf6ved1n6lucxfndpidiufbn6zs83kosxcus251r73atn6kuxsrhykooam7bgk2ka15xc3gdwxtacqm6ezzw6sum4tacihvtlb57ennvvrsmye8qexc3sfei5di2xv7pvqu9478gcjmaaots5vv2jhu8lugl32llcl4hmcta0fpfhzevq446z8ol5g7j5jsf8bsh8qhdlg1w7dhmjh90uoo6d7n9ncdt17tqh1oexcccp8fsev8t7vhe7jqm68lovlw8bs34um8yl5am3fkdeed1wuknjphvs6ydunffmmqb5j6rnqn24lur4rmopbkfvrssf3wcz4b8d7xtbv27nonq6sy468bpclsabhwpj0p7frr58b01hypiglaz1pewv31zxpgdu8p98zn1tpcyj578klb1oqkcsrrgsbejj949tjrergrq0m9ylosskcken4szi1yl8aqto2fq6z71j67gbpaic7o96fn6ztwbk0nx1mg40lcl71af1sb0ynfwc1dpd2nd0pej5j0f5mz4576l3eapos7c9ycfm3ci7q34xk24kd74og7by3p7fu7rl6ba0rut0ko88eih5lrh98b24mzc3nh8cqdsr7glbunz3t7hcqa7clgdikq2hp3bv3f91bd01dnjefk8panlphc724t7a3c6sbf1c0zg2ej86k2pz72t2frkyzwqscfka4udowaici76z7dix0sr5k0tdhl74i1r12it0rb6ntiyc6bvu3b1u7h1f94ftnsk2liyaafrhrfcyk476xi406kpnxko41hcpn2lokvhdow7mduwdfocv25l5wnarqawryzxlxk738le8b38r51vtda143lslqsd9sdkocsbnjtzhlqcw6qt5lejfrqfe11za6ws63f7gy1npfcq27cgewcnq3ln6vjr6ai7xc8ct0w370qhmvmg396hqt4bgqi97rd7pltfeuyi52g2gmkvd7otuwvzq58jqrc1vyosfprbjitqlfqqaztpuows79k0xxcupnbsdg31ozx86w6qcfsgjz6lwjb76ecv4prxunx3e8e7svip3qju4vah8cqicq0e4ql0b3r43ntkpcho9i2eeia5tqzn329xig6x7f49g5ykny0y3qwtxt1xmivp1a818x4ke1wbp4boc0xr694huhdu3hl6ukdqtuc1gp2lhqbmb8trcp16suwnpkk63gga31wtcluj4pua2numkirwf6dawuuf4all5d5ipa21tyt9gv2yhx3uaixo7rh6ncymtsj0blqi0dqxfupq8xesc1q7676leorewdo3082j0262jir6xzgl9qvrtgyydt4icle52afpg1xuexd0of46ulc86dmilbp2or98l0r4mepkjis4ubzvauqdf7dirwlqq8k8cdxub8q4o5or4f3nzi525b18hngedx5imwaeue772zfh5nt40ik7woeyxd6dfc3i7wmwd5aeawyjd1y8se12nnmdbpvvecscnph88bjjw2q1qnhn8jzdudjssdyo8444sck9lp8iakepg15t82ifpkuiwbtekugxq1glzd70ej3onsan2epduacn08mrb2xzjaun9z0nmhcx978pc5k0vpsuuqv5hon4k4hgvaz7ofi9dax8swcldom3phxsdx3fvoe95suno7c7yv7f4toiut2fn1gcqumylavpvruqsupi0tqrfwivjs38gajdxhvslx7yz2dgt6g36ho05tnrwbj9i2vgqkt7dj2cuzjc4xvq9jlxa0rn8bnor6yg7blhwyhhnyh05dnphvuq15zvpz9ai1nbpmwmbeg10farfzul8ga7geusl4u4z4trlo7s0pmksisk4rrft08ovmxyn5lnnc28fcy70bjc5
f40be83u0ecr2vgdkvouk30bbbfh5je370s3ql1dd3u5cqrsmkbtpzpdj0zebufw824h3fl6ux6ybqhduyk944dsdk2q5tnxot7ekuj596g51m583ccmp5jga91jsuw7kp96ilrwzaewrwhdm7llpzzj6gqdshxe37tlv355gxbssu0x86mhy9oqgbkcniburpw0t61bqzqysfglvvn7cxwpdiwmqo5gzdc1qw3r7ijxoqcule5a26ar9fgxr4ckoyjiacm8zdsifmht0yez4wxohj1mrvup5jt6dgsa1v013ow2skzrd375ltew31fsmwav6i1katx0qp76c0rz8pfag8hefo9k3esm1yiion7odalt7v5rehwl79b2fzq4m9aixhjm3abfvrmu011a12w7tio3uagghrzzrafuj1gx4tx1swu978l5kffd2hpj7h7lumongs39u0nqtprpo7yoyti1tadw99a8szrr1bmcdwl9em1fwsbpxsbfr8ksnzu7tk1dh18toahbcovarajmte6wj0ortz 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:32.097 11:11:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:32.097 { 00:08:32.097 "subsystems": [ 00:08:32.097 { 00:08:32.097 "subsystem": "bdev", 00:08:32.097 "config": [ 00:08:32.097 { 00:08:32.097 "params": { 00:08:32.097 "trtype": "pcie", 00:08:32.097 "traddr": "0000:00:10.0", 00:08:32.097 "name": "Nvme0" 00:08:32.097 }, 00:08:32.097 "method": "bdev_nvme_attach_controller" 00:08:32.097 }, 00:08:32.097 { 00:08:32.097 "method": "bdev_wait_for_examine" 00:08:32.097 } 00:08:32.097 ] 00:08:32.097 } 00:08:32.097 ] 00:08:32.097 } 00:08:32.097 [2024-12-10 11:11:38.828339] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:32.097 [2024-12-10 11:11:38.828502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61635 ] 00:08:32.356 [2024-12-10 11:11:39.007304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.356 [2024-12-10 11:11:39.133658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.615 [2024-12-10 11:11:39.359708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.873  [2024-12-10T11:11:40.635Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:33.809 00:08:33.809 11:11:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:33.809 11:11:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:33.809 11:11:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:33.809 11:11:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:33.809 { 00:08:33.809 "subsystems": [ 00:08:33.809 { 00:08:33.809 "subsystem": "bdev", 00:08:33.809 "config": [ 00:08:33.809 { 00:08:33.809 "params": { 00:08:33.809 "trtype": "pcie", 00:08:33.809 "traddr": "0000:00:10.0", 00:08:33.809 "name": "Nvme0" 00:08:33.809 }, 00:08:33.809 "method": "bdev_nvme_attach_controller" 00:08:33.809 }, 00:08:33.809 { 00:08:33.809 "method": "bdev_wait_for_examine" 00:08:33.809 } 00:08:33.809 ] 00:08:33.809 } 00:08:33.809 ] 00:08:33.809 } 00:08:34.068 [2024-12-10 11:11:40.642344] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
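dd_rw_offset exercises the --seek/--skip options: the 4096-byte string generated above is written one block into the bdev and read back from the same offset, then compared with a bash pattern match instead of diff. A condensed sketch of that sequence, again assuming the fd-62 config and illustrative file names:

  printf '%s' "$data" > dd.dump0                 # the 4 KiB generated string shown above
  # write one block, starting one block past the beginning of the output bdev
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62
  # read that single block back from offset 1
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json /dev/fd/62
  # compare exactly 4096 bytes; xtrace escaping of the right-hand pattern is what
  # produces the long backslash-escaped copy of the data later in this log
  read -rn4096 data_check < dd.dump1
  [[ $data_check == "$data" ]]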
00:08:34.068 [2024-12-10 11:11:40.642709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61661 ] 00:08:34.068 [2024-12-10 11:11:40.824267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.326 [2024-12-10 11:11:40.949250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.584 [2024-12-10 11:11:41.158896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.584  [2024-12-10T11:11:42.345Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:35.519 00:08:35.519 11:11:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:35.519 ************************************ 00:08:35.519 END TEST dd_rw_offset 00:08:35.519 ************************************ 00:08:35.519 11:11:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ u2tznkq6tclkjisvtbau64x2yf78l2go0eljoxxq0fday9nhd7u14pqxgvmjhxcctv1icbj0be6rzevppbmx8cq2wlhea2gf8uo9viq2x1jwrjfh30gd6bv1jmkifqlzcfbgwhi30x3g0l7150g38uc8zm4hd9d444u4jwnrog8e2w4tdsrobd9ml67nrcfvutgevy28eizpio1h75bh78rhsqcio636iyips6vve1das42g33kg4dog6lmwf8czgrwsc29jyvhmk2qu4dp4u2nr2j13q85sq5atzoi7apznvkp7cq8p1ykynyp37fm5eiunxwb973bhb48vnvphhsb39kpqflvec19mm4boz4u3cb7vvjykfroyrmcunmygd8bu7h1j2zbzn3jtimbnd2fooeqh05x0f5v8q66i2ua58e6cmtg479o9qyhj4m602woa280jt5cf1s70s5i0jtcbpcvrirl0y48j4ae32p7q57nbs1npp2zj7w731fgyo1fnou05pydijiik85yn3zuzzrkvuwp1054le0y6t6gp1mtw29ccta0ohli7hb6poclcib7kk515efc1tgfynas3d3pzm5olxzsihzdgfmuthexh08huxlyjefi4f6cumew2m0sqznabggilakllceqon8ay4kkx7eibitt2mfiuk8438nstubswg7ecaj45xn9f7us0bwo7mautldt3dlvxnccttgy5rosk4ylfroqn2aein0y0hfi1q4j18qu8k8z5lnnndckq6fxgwuicb8owrgjqgro8q4uqujcg7vtvmzz9mh9gi3i18siqmypre2qti4z0xs3w5j64ff41fs3e65whq519geptmzystx55uxkzre6kp0v97ld1bdnvx1o2quekyhu79d4y700rwmnt6yjzk42qar294d64o4ve46rgeb44frn4z3fb8i3okpvlx7lep1u5jnk8tlrkwov0svzx59q6ileab8ad5stjs0wcbnm56coq706rpre8d0r5lu7oq5kotlnbqflit94ns628e7eg9sfgxzyc0ho9hqnxz3urmo7zugwgkppcr1nw5n7y5jfiq959hdw63l19l6gkj3qbyy59ptyran3ys43uz4wls7f0n7wj3olmjsfz7delqik0hk5yzts8s8mvrwd646tva6q29dviz2fdcflytw4yr9yh7t9v5zx03clb99ajiqrlcvaeigbjfm80c9eq4fnq1y1c9vwco5657mcmkf9vvn3my2g2zab8mcjri6d49f7oxjcreakm8sijgf94d2jr29gvrdm5927ghlqxz9dpuhbnt5eyxvfxa4s88uqxzpvzm7qzw8u28rj7tqek251vkr1s10gp4i1ym7qzpmqwtu1s1b6z8c8rifl91ngo5o4yqllpbll4jvkd501blfnsrqui890l7qi1ucz3d0dfixlzpe1rau5syvuzuw4c7bqr47t3tpoags59hiwhsehfvqr6cvvq0v6xyrhaa51j0yich3ajrh2fkij5bht2kqrgih042c10t3ztsm9auwblwy48k0whb26eocoxhyxi2jeupmbraczkcptkgxutl9xqgng8628u48iscnukm0vn3pv706ukdcml9gbujaf6ved1n6lucxfndpidiufbn6zs83kosxcus251r73atn6kuxsrhykooam7bgk2ka15xc3gdwxtacqm6ezzw6sum4tacihvtlb57ennvvrsmye8qexc3sfei5di2xv7pvqu9478gcjmaaots5vv2jhu8lugl32llcl4hmcta0fpfhzevq446z8ol5g7j5jsf8bsh8qhdlg1w7dhmjh90uoo6d7n9ncdt17tqh1oexcccp8fsev8t7vhe7jqm68lovlw8bs34um8yl5am3fkdeed1wuknjphvs6ydunffmmqb5j6rnqn24lur4rmopbkfvrssf3wcz4b8d7xtbv27nonq6sy468bpclsabhwpj0p7frr58b01hypiglaz1pewv31zxpgdu8p98zn1tpcyj578klb1oqkcsrrgsbejj949tjrergrq0m9ylosskcken4szi1yl8aqto2fq6z71j67gbpaic7o96fn6ztwbk0nx1mg40lcl71af1sb0ynfwc1dpd2nd0pej5j0f5mz4576l3eapos7c9ycfm3ci7q34xk24kd74og7by3p7fu7rl6ba0rut0ko88eih5lrh98b24mzc3nh8cqdsr7glbunz3t7hcqa7clgdikq2hp3bv3f91bd01dnjefk8panlphc724t7a3c6sbf1c0zg2ej86k2pz72t2frkyzwqscfka4udowaici76z7dix0sr5k0tdhl74i1r12it0rb6ntiyc6bvu3b1u7h1f94ftnsk2liyaafrhrfcyk476xi406kpnxko41hcpn2lokvhdow7mduwdfocv25l
5wnarqawryzxlxk738le8b38r51vtda143lslqsd9sdkocsbnjtzhlqcw6qt5lejfrqfe11za6ws63f7gy1npfcq27cgewcnq3ln6vjr6ai7xc8ct0w370qhmvmg396hqt4bgqi97rd7pltfeuyi52g2gmkvd7otuwvzq58jqrc1vyosfprbjitqlfqqaztpuows79k0xxcupnbsdg31ozx86w6qcfsgjz6lwjb76ecv4prxunx3e8e7svip3qju4vah8cqicq0e4ql0b3r43ntkpcho9i2eeia5tqzn329xig6x7f49g5ykny0y3qwtxt1xmivp1a818x4ke1wbp4boc0xr694huhdu3hl6ukdqtuc1gp2lhqbmb8trcp16suwnpkk63gga31wtcluj4pua2numkirwf6dawuuf4all5d5ipa21tyt9gv2yhx3uaixo7rh6ncymtsj0blqi0dqxfupq8xesc1q7676leorewdo3082j0262jir6xzgl9qvrtgyydt4icle52afpg1xuexd0of46ulc86dmilbp2or98l0r4mepkjis4ubzvauqdf7dirwlqq8k8cdxub8q4o5or4f3nzi525b18hngedx5imwaeue772zfh5nt40ik7woeyxd6dfc3i7wmwd5aeawyjd1y8se12nnmdbpvvecscnph88bjjw2q1qnhn8jzdudjssdyo8444sck9lp8iakepg15t82ifpkuiwbtekugxq1glzd70ej3onsan2epduacn08mrb2xzjaun9z0nmhcx978pc5k0vpsuuqv5hon4k4hgvaz7ofi9dax8swcldom3phxsdx3fvoe95suno7c7yv7f4toiut2fn1gcqumylavpvruqsupi0tqrfwivjs38gajdxhvslx7yz2dgt6g36ho05tnrwbj9i2vgqkt7dj2cuzjc4xvq9jlxa0rn8bnor6yg7blhwyhhnyh05dnphvuq15zvpz9ai1nbpmwmbeg10farfzul8ga7geusl4u4z4trlo7s0pmksisk4rrft08ovmxyn5lnnc28fcy70bjc5f40be83u0ecr2vgdkvouk30bbbfh5je370s3ql1dd3u5cqrsmkbtpzpdj0zebufw824h3fl6ux6ybqhduyk944dsdk2q5tnxot7ekuj596g51m583ccmp5jga91jsuw7kp96ilrwzaewrwhdm7llpzzj6gqdshxe37tlv355gxbssu0x86mhy9oqgbkcniburpw0t61bqzqysfglvvn7cxwpdiwmqo5gzdc1qw3r7ijxoqcule5a26ar9fgxr4ckoyjiacm8zdsifmht0yez4wxohj1mrvup5jt6dgsa1v013ow2skzrd375ltew31fsmwav6i1katx0qp76c0rz8pfag8hefo9k3esm1yiion7odalt7v5rehwl79b2fzq4m9aixhjm3abfvrmu011a12w7tio3uagghrzzrafuj1gx4tx1swu978l5kffd2hpj7h7lumongs39u0nqtprpo7yoyti1tadw99a8szrr1bmcdwl9em1fwsbpxsbfr8ksnzu7tk1dh18toahbcovarajmte6wj0ortz == \u\2\t\z\n\k\q\6\t\c\l\k\j\i\s\v\t\b\a\u\6\4\x\2\y\f\7\8\l\2\g\o\0\e\l\j\o\x\x\q\0\f\d\a\y\9\n\h\d\7\u\1\4\p\q\x\g\v\m\j\h\x\c\c\t\v\1\i\c\b\j\0\b\e\6\r\z\e\v\p\p\b\m\x\8\c\q\2\w\l\h\e\a\2\g\f\8\u\o\9\v\i\q\2\x\1\j\w\r\j\f\h\3\0\g\d\6\b\v\1\j\m\k\i\f\q\l\z\c\f\b\g\w\h\i\3\0\x\3\g\0\l\7\1\5\0\g\3\8\u\c\8\z\m\4\h\d\9\d\4\4\4\u\4\j\w\n\r\o\g\8\e\2\w\4\t\d\s\r\o\b\d\9\m\l\6\7\n\r\c\f\v\u\t\g\e\v\y\2\8\e\i\z\p\i\o\1\h\7\5\b\h\7\8\r\h\s\q\c\i\o\6\3\6\i\y\i\p\s\6\v\v\e\1\d\a\s\4\2\g\3\3\k\g\4\d\o\g\6\l\m\w\f\8\c\z\g\r\w\s\c\2\9\j\y\v\h\m\k\2\q\u\4\d\p\4\u\2\n\r\2\j\1\3\q\8\5\s\q\5\a\t\z\o\i\7\a\p\z\n\v\k\p\7\c\q\8\p\1\y\k\y\n\y\p\3\7\f\m\5\e\i\u\n\x\w\b\9\7\3\b\h\b\4\8\v\n\v\p\h\h\s\b\3\9\k\p\q\f\l\v\e\c\1\9\m\m\4\b\o\z\4\u\3\c\b\7\v\v\j\y\k\f\r\o\y\r\m\c\u\n\m\y\g\d\8\b\u\7\h\1\j\2\z\b\z\n\3\j\t\i\m\b\n\d\2\f\o\o\e\q\h\0\5\x\0\f\5\v\8\q\6\6\i\2\u\a\5\8\e\6\c\m\t\g\4\7\9\o\9\q\y\h\j\4\m\6\0\2\w\o\a\2\8\0\j\t\5\c\f\1\s\7\0\s\5\i\0\j\t\c\b\p\c\v\r\i\r\l\0\y\4\8\j\4\a\e\3\2\p\7\q\5\7\n\b\s\1\n\p\p\2\z\j\7\w\7\3\1\f\g\y\o\1\f\n\o\u\0\5\p\y\d\i\j\i\i\k\8\5\y\n\3\z\u\z\z\r\k\v\u\w\p\1\0\5\4\l\e\0\y\6\t\6\g\p\1\m\t\w\2\9\c\c\t\a\0\o\h\l\i\7\h\b\6\p\o\c\l\c\i\b\7\k\k\5\1\5\e\f\c\1\t\g\f\y\n\a\s\3\d\3\p\z\m\5\o\l\x\z\s\i\h\z\d\g\f\m\u\t\h\e\x\h\0\8\h\u\x\l\y\j\e\f\i\4\f\6\c\u\m\e\w\2\m\0\s\q\z\n\a\b\g\g\i\l\a\k\l\l\c\e\q\o\n\8\a\y\4\k\k\x\7\e\i\b\i\t\t\2\m\f\i\u\k\8\4\3\8\n\s\t\u\b\s\w\g\7\e\c\a\j\4\5\x\n\9\f\7\u\s\0\b\w\o\7\m\a\u\t\l\d\t\3\d\l\v\x\n\c\c\t\t\g\y\5\r\o\s\k\4\y\l\f\r\o\q\n\2\a\e\i\n\0\y\0\h\f\i\1\q\4\j\1\8\q\u\8\k\8\z\5\l\n\n\n\d\c\k\q\6\f\x\g\w\u\i\c\b\8\o\w\r\g\j\q\g\r\o\8\q\4\u\q\u\j\c\g\7\v\t\v\m\z\z\9\m\h\9\g\i\3\i\1\8\s\i\q\m\y\p\r\e\2\q\t\i\4\z\0\x\s\3\w\5\j\6\4\f\f\4\1\f\s\3\e\6\5\w\h\q\5\1\9\g\e\p\t\m\z\y\s\t\x\5\5\u\x\k\z\r\e\6\k\p\0\v\9\7\l\d\1\b\d\n\v\x\1\o\2\q\u\e\k\y\h\u\7\9\d\4\y\7\0\0\r\w\m\n\t\6\y\j\z\k\4\2\q\a\r\2\9\4\d\6\4\o\4\v\e\4\6\r\g\e\b\4\4\f\r\n\4\z\3\f\b\8\i\3\o\k\p\v\l
\x\7\l\e\p\1\u\5\j\n\k\8\t\l\r\k\w\o\v\0\s\v\z\x\5\9\q\6\i\l\e\a\b\8\a\d\5\s\t\j\s\0\w\c\b\n\m\5\6\c\o\q\7\0\6\r\p\r\e\8\d\0\r\5\l\u\7\o\q\5\k\o\t\l\n\b\q\f\l\i\t\9\4\n\s\6\2\8\e\7\e\g\9\s\f\g\x\z\y\c\0\h\o\9\h\q\n\x\z\3\u\r\m\o\7\z\u\g\w\g\k\p\p\c\r\1\n\w\5\n\7\y\5\j\f\i\q\9\5\9\h\d\w\6\3\l\1\9\l\6\g\k\j\3\q\b\y\y\5\9\p\t\y\r\a\n\3\y\s\4\3\u\z\4\w\l\s\7\f\0\n\7\w\j\3\o\l\m\j\s\f\z\7\d\e\l\q\i\k\0\h\k\5\y\z\t\s\8\s\8\m\v\r\w\d\6\4\6\t\v\a\6\q\2\9\d\v\i\z\2\f\d\c\f\l\y\t\w\4\y\r\9\y\h\7\t\9\v\5\z\x\0\3\c\l\b\9\9\a\j\i\q\r\l\c\v\a\e\i\g\b\j\f\m\8\0\c\9\e\q\4\f\n\q\1\y\1\c\9\v\w\c\o\5\6\5\7\m\c\m\k\f\9\v\v\n\3\m\y\2\g\2\z\a\b\8\m\c\j\r\i\6\d\4\9\f\7\o\x\j\c\r\e\a\k\m\8\s\i\j\g\f\9\4\d\2\j\r\2\9\g\v\r\d\m\5\9\2\7\g\h\l\q\x\z\9\d\p\u\h\b\n\t\5\e\y\x\v\f\x\a\4\s\8\8\u\q\x\z\p\v\z\m\7\q\z\w\8\u\2\8\r\j\7\t\q\e\k\2\5\1\v\k\r\1\s\1\0\g\p\4\i\1\y\m\7\q\z\p\m\q\w\t\u\1\s\1\b\6\z\8\c\8\r\i\f\l\9\1\n\g\o\5\o\4\y\q\l\l\p\b\l\l\4\j\v\k\d\5\0\1\b\l\f\n\s\r\q\u\i\8\9\0\l\7\q\i\1\u\c\z\3\d\0\d\f\i\x\l\z\p\e\1\r\a\u\5\s\y\v\u\z\u\w\4\c\7\b\q\r\4\7\t\3\t\p\o\a\g\s\5\9\h\i\w\h\s\e\h\f\v\q\r\6\c\v\v\q\0\v\6\x\y\r\h\a\a\5\1\j\0\y\i\c\h\3\a\j\r\h\2\f\k\i\j\5\b\h\t\2\k\q\r\g\i\h\0\4\2\c\1\0\t\3\z\t\s\m\9\a\u\w\b\l\w\y\4\8\k\0\w\h\b\2\6\e\o\c\o\x\h\y\x\i\2\j\e\u\p\m\b\r\a\c\z\k\c\p\t\k\g\x\u\t\l\9\x\q\g\n\g\8\6\2\8\u\4\8\i\s\c\n\u\k\m\0\v\n\3\p\v\7\0\6\u\k\d\c\m\l\9\g\b\u\j\a\f\6\v\e\d\1\n\6\l\u\c\x\f\n\d\p\i\d\i\u\f\b\n\6\z\s\8\3\k\o\s\x\c\u\s\2\5\1\r\7\3\a\t\n\6\k\u\x\s\r\h\y\k\o\o\a\m\7\b\g\k\2\k\a\1\5\x\c\3\g\d\w\x\t\a\c\q\m\6\e\z\z\w\6\s\u\m\4\t\a\c\i\h\v\t\l\b\5\7\e\n\n\v\v\r\s\m\y\e\8\q\e\x\c\3\s\f\e\i\5\d\i\2\x\v\7\p\v\q\u\9\4\7\8\g\c\j\m\a\a\o\t\s\5\v\v\2\j\h\u\8\l\u\g\l\3\2\l\l\c\l\4\h\m\c\t\a\0\f\p\f\h\z\e\v\q\4\4\6\z\8\o\l\5\g\7\j\5\j\s\f\8\b\s\h\8\q\h\d\l\g\1\w\7\d\h\m\j\h\9\0\u\o\o\6\d\7\n\9\n\c\d\t\1\7\t\q\h\1\o\e\x\c\c\c\p\8\f\s\e\v\8\t\7\v\h\e\7\j\q\m\6\8\l\o\v\l\w\8\b\s\3\4\u\m\8\y\l\5\a\m\3\f\k\d\e\e\d\1\w\u\k\n\j\p\h\v\s\6\y\d\u\n\f\f\m\m\q\b\5\j\6\r\n\q\n\2\4\l\u\r\4\r\m\o\p\b\k\f\v\r\s\s\f\3\w\c\z\4\b\8\d\7\x\t\b\v\2\7\n\o\n\q\6\s\y\4\6\8\b\p\c\l\s\a\b\h\w\p\j\0\p\7\f\r\r\5\8\b\0\1\h\y\p\i\g\l\a\z\1\p\e\w\v\3\1\z\x\p\g\d\u\8\p\9\8\z\n\1\t\p\c\y\j\5\7\8\k\l\b\1\o\q\k\c\s\r\r\g\s\b\e\j\j\9\4\9\t\j\r\e\r\g\r\q\0\m\9\y\l\o\s\s\k\c\k\e\n\4\s\z\i\1\y\l\8\a\q\t\o\2\f\q\6\z\7\1\j\6\7\g\b\p\a\i\c\7\o\9\6\f\n\6\z\t\w\b\k\0\n\x\1\m\g\4\0\l\c\l\7\1\a\f\1\s\b\0\y\n\f\w\c\1\d\p\d\2\n\d\0\p\e\j\5\j\0\f\5\m\z\4\5\7\6\l\3\e\a\p\o\s\7\c\9\y\c\f\m\3\c\i\7\q\3\4\x\k\2\4\k\d\7\4\o\g\7\b\y\3\p\7\f\u\7\r\l\6\b\a\0\r\u\t\0\k\o\8\8\e\i\h\5\l\r\h\9\8\b\2\4\m\z\c\3\n\h\8\c\q\d\s\r\7\g\l\b\u\n\z\3\t\7\h\c\q\a\7\c\l\g\d\i\k\q\2\h\p\3\b\v\3\f\9\1\b\d\0\1\d\n\j\e\f\k\8\p\a\n\l\p\h\c\7\2\4\t\7\a\3\c\6\s\b\f\1\c\0\z\g\2\e\j\8\6\k\2\p\z\7\2\t\2\f\r\k\y\z\w\q\s\c\f\k\a\4\u\d\o\w\a\i\c\i\7\6\z\7\d\i\x\0\s\r\5\k\0\t\d\h\l\7\4\i\1\r\1\2\i\t\0\r\b\6\n\t\i\y\c\6\b\v\u\3\b\1\u\7\h\1\f\9\4\f\t\n\s\k\2\l\i\y\a\a\f\r\h\r\f\c\y\k\4\7\6\x\i\4\0\6\k\p\n\x\k\o\4\1\h\c\p\n\2\l\o\k\v\h\d\o\w\7\m\d\u\w\d\f\o\c\v\2\5\l\5\w\n\a\r\q\a\w\r\y\z\x\l\x\k\7\3\8\l\e\8\b\3\8\r\5\1\v\t\d\a\1\4\3\l\s\l\q\s\d\9\s\d\k\o\c\s\b\n\j\t\z\h\l\q\c\w\6\q\t\5\l\e\j\f\r\q\f\e\1\1\z\a\6\w\s\6\3\f\7\g\y\1\n\p\f\c\q\2\7\c\g\e\w\c\n\q\3\l\n\6\v\j\r\6\a\i\7\x\c\8\c\t\0\w\3\7\0\q\h\m\v\m\g\3\9\6\h\q\t\4\b\g\q\i\9\7\r\d\7\p\l\t\f\e\u\y\i\5\2\g\2\g\m\k\v\d\7\o\t\u\w\v\z\q\5\8\j\q\r\c\1\v\y\o\s\f\p\r\b\j\i\t\q\l\f\q\q\a\z\t\p\u\o\w\s\7\9\k\0\x\x\c\u\p\n\b\s\d\g\3\1\o\z\x\8\6\w\6\q\c\f\s\g\j\z\6\l\w\j\b\7\6\e\c\v\4\p\r\x\u\n\x\3\e\8\e\7\s\v\i\p\3\q\j\u\4\v\a\h\8\c\q\i\c\q\0\e\
4\q\l\0\b\3\r\4\3\n\t\k\p\c\h\o\9\i\2\e\e\i\a\5\t\q\z\n\3\2\9\x\i\g\6\x\7\f\4\9\g\5\y\k\n\y\0\y\3\q\w\t\x\t\1\x\m\i\v\p\1\a\8\1\8\x\4\k\e\1\w\b\p\4\b\o\c\0\x\r\6\9\4\h\u\h\d\u\3\h\l\6\u\k\d\q\t\u\c\1\g\p\2\l\h\q\b\m\b\8\t\r\c\p\1\6\s\u\w\n\p\k\k\6\3\g\g\a\3\1\w\t\c\l\u\j\4\p\u\a\2\n\u\m\k\i\r\w\f\6\d\a\w\u\u\f\4\a\l\l\5\d\5\i\p\a\2\1\t\y\t\9\g\v\2\y\h\x\3\u\a\i\x\o\7\r\h\6\n\c\y\m\t\s\j\0\b\l\q\i\0\d\q\x\f\u\p\q\8\x\e\s\c\1\q\7\6\7\6\l\e\o\r\e\w\d\o\3\0\8\2\j\0\2\6\2\j\i\r\6\x\z\g\l\9\q\v\r\t\g\y\y\d\t\4\i\c\l\e\5\2\a\f\p\g\1\x\u\e\x\d\0\o\f\4\6\u\l\c\8\6\d\m\i\l\b\p\2\o\r\9\8\l\0\r\4\m\e\p\k\j\i\s\4\u\b\z\v\a\u\q\d\f\7\d\i\r\w\l\q\q\8\k\8\c\d\x\u\b\8\q\4\o\5\o\r\4\f\3\n\z\i\5\2\5\b\1\8\h\n\g\e\d\x\5\i\m\w\a\e\u\e\7\7\2\z\f\h\5\n\t\4\0\i\k\7\w\o\e\y\x\d\6\d\f\c\3\i\7\w\m\w\d\5\a\e\a\w\y\j\d\1\y\8\s\e\1\2\n\n\m\d\b\p\v\v\e\c\s\c\n\p\h\8\8\b\j\j\w\2\q\1\q\n\h\n\8\j\z\d\u\d\j\s\s\d\y\o\8\4\4\4\s\c\k\9\l\p\8\i\a\k\e\p\g\1\5\t\8\2\i\f\p\k\u\i\w\b\t\e\k\u\g\x\q\1\g\l\z\d\7\0\e\j\3\o\n\s\a\n\2\e\p\d\u\a\c\n\0\8\m\r\b\2\x\z\j\a\u\n\9\z\0\n\m\h\c\x\9\7\8\p\c\5\k\0\v\p\s\u\u\q\v\5\h\o\n\4\k\4\h\g\v\a\z\7\o\f\i\9\d\a\x\8\s\w\c\l\d\o\m\3\p\h\x\s\d\x\3\f\v\o\e\9\5\s\u\n\o\7\c\7\y\v\7\f\4\t\o\i\u\t\2\f\n\1\g\c\q\u\m\y\l\a\v\p\v\r\u\q\s\u\p\i\0\t\q\r\f\w\i\v\j\s\3\8\g\a\j\d\x\h\v\s\l\x\7\y\z\2\d\g\t\6\g\3\6\h\o\0\5\t\n\r\w\b\j\9\i\2\v\g\q\k\t\7\d\j\2\c\u\z\j\c\4\x\v\q\9\j\l\x\a\0\r\n\8\b\n\o\r\6\y\g\7\b\l\h\w\y\h\h\n\y\h\0\5\d\n\p\h\v\u\q\1\5\z\v\p\z\9\a\i\1\n\b\p\m\w\m\b\e\g\1\0\f\a\r\f\z\u\l\8\g\a\7\g\e\u\s\l\4\u\4\z\4\t\r\l\o\7\s\0\p\m\k\s\i\s\k\4\r\r\f\t\0\8\o\v\m\x\y\n\5\l\n\n\c\2\8\f\c\y\7\0\b\j\c\5\f\4\0\b\e\8\3\u\0\e\c\r\2\v\g\d\k\v\o\u\k\3\0\b\b\b\f\h\5\j\e\3\7\0\s\3\q\l\1\d\d\3\u\5\c\q\r\s\m\k\b\t\p\z\p\d\j\0\z\e\b\u\f\w\8\2\4\h\3\f\l\6\u\x\6\y\b\q\h\d\u\y\k\9\4\4\d\s\d\k\2\q\5\t\n\x\o\t\7\e\k\u\j\5\9\6\g\5\1\m\5\8\3\c\c\m\p\5\j\g\a\9\1\j\s\u\w\7\k\p\9\6\i\l\r\w\z\a\e\w\r\w\h\d\m\7\l\l\p\z\z\j\6\g\q\d\s\h\x\e\3\7\t\l\v\3\5\5\g\x\b\s\s\u\0\x\8\6\m\h\y\9\o\q\g\b\k\c\n\i\b\u\r\p\w\0\t\6\1\b\q\z\q\y\s\f\g\l\v\v\n\7\c\x\w\p\d\i\w\m\q\o\5\g\z\d\c\1\q\w\3\r\7\i\j\x\o\q\c\u\l\e\5\a\2\6\a\r\9\f\g\x\r\4\c\k\o\y\j\i\a\c\m\8\z\d\s\i\f\m\h\t\0\y\e\z\4\w\x\o\h\j\1\m\r\v\u\p\5\j\t\6\d\g\s\a\1\v\0\1\3\o\w\2\s\k\z\r\d\3\7\5\l\t\e\w\3\1\f\s\m\w\a\v\6\i\1\k\a\t\x\0\q\p\7\6\c\0\r\z\8\p\f\a\g\8\h\e\f\o\9\k\3\e\s\m\1\y\i\i\o\n\7\o\d\a\l\t\7\v\5\r\e\h\w\l\7\9\b\2\f\z\q\4\m\9\a\i\x\h\j\m\3\a\b\f\v\r\m\u\0\1\1\a\1\2\w\7\t\i\o\3\u\a\g\g\h\r\z\z\r\a\f\u\j\1\g\x\4\t\x\1\s\w\u\9\7\8\l\5\k\f\f\d\2\h\p\j\7\h\7\l\u\m\o\n\g\s\3\9\u\0\n\q\t\p\r\p\o\7\y\o\y\t\i\1\t\a\d\w\9\9\a\8\s\z\r\r\1\b\m\c\d\w\l\9\e\m\1\f\w\s\b\p\x\s\b\f\r\8\k\s\n\z\u\7\t\k\1\d\h\1\8\t\o\a\h\b\c\o\v\a\r\a\j\m\t\e\6\w\j\0\o\r\t\z ]] 00:08:35.519 00:08:35.519 real 0m3.584s 00:08:35.519 user 0m3.043s 00:08:35.519 sys 0m1.823s 00:08:35.519 11:11:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.519 11:11:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:35.519 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:35.520 11:11:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:35.778 { 00:08:35.778 "subsystems": [ 00:08:35.778 { 00:08:35.778 "subsystem": "bdev", 00:08:35.778 "config": [ 00:08:35.778 { 00:08:35.778 "params": { 00:08:35.778 "trtype": "pcie", 00:08:35.778 "traddr": "0000:00:10.0", 00:08:35.778 "name": "Nvme0" 00:08:35.778 }, 00:08:35.778 "method": "bdev_nvme_attach_controller" 00:08:35.778 }, 00:08:35.778 { 00:08:35.778 "method": "bdev_wait_for_examine" 00:08:35.778 } 00:08:35.778 ] 00:08:35.778 } 00:08:35.778 ] 00:08:35.778 } 00:08:35.778 [2024-12-10 11:11:42.419234] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:35.778 [2024-12-10 11:11:42.419411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61702 ] 00:08:35.778 [2024-12-10 11:11:42.594370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.037 [2024-12-10 11:11:42.733444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.295 [2024-12-10 11:11:42.980443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.554  [2024-12-10T11:11:44.314Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:37.488 00:08:37.488 11:11:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.488 ************************************ 00:08:37.488 END TEST spdk_dd_basic_rw 00:08:37.488 ************************************ 00:08:37.488 00:08:37.488 real 0m42.953s 00:08:37.488 user 0m36.220s 00:08:37.488 sys 0m19.342s 00:08:37.488 11:11:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.488 11:11:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:37.488 11:11:44 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:37.488 11:11:44 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.488 11:11:44 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.488 11:11:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:37.488 ************************************ 00:08:37.488 START TEST spdk_dd_posix 00:08:37.488 ************************************ 00:08:37.488 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:37.488 * Looking for test storage... 
00:08:37.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:37.488 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.488 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.488 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.748 --rc genhtml_branch_coverage=1 00:08:37.748 --rc genhtml_function_coverage=1 00:08:37.748 --rc genhtml_legend=1 00:08:37.748 --rc geninfo_all_blocks=1 00:08:37.748 --rc geninfo_unexecuted_blocks=1 00:08:37.748 00:08:37.748 ' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.748 --rc genhtml_branch_coverage=1 00:08:37.748 --rc genhtml_function_coverage=1 00:08:37.748 --rc genhtml_legend=1 00:08:37.748 --rc geninfo_all_blocks=1 00:08:37.748 --rc geninfo_unexecuted_blocks=1 00:08:37.748 00:08:37.748 ' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.748 --rc genhtml_branch_coverage=1 00:08:37.748 --rc genhtml_function_coverage=1 00:08:37.748 --rc genhtml_legend=1 00:08:37.748 --rc geninfo_all_blocks=1 00:08:37.748 --rc geninfo_unexecuted_blocks=1 00:08:37.748 00:08:37.748 ' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.748 --rc genhtml_branch_coverage=1 00:08:37.748 --rc genhtml_function_coverage=1 00:08:37.748 --rc genhtml_legend=1 00:08:37.748 --rc geninfo_all_blocks=1 00:08:37.748 --rc geninfo_unexecuted_blocks=1 00:08:37.748 00:08:37.748 ' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:37.748 * First test run, liburing in use 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:37.748 ************************************ 00:08:37.748 START TEST dd_flag_append 00:08:37.748 ************************************ 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=99ruprzul4ujtb89pwf9vm2b7tiphe3x 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=35cnb8qo8m5sz5pfmq09k1n6cgcx4qcp 00:08:37.748 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 99ruprzul4ujtb89pwf9vm2b7tiphe3x 00:08:37.749 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 35cnb8qo8m5sz5pfmq09k1n6cgcx4qcp 00:08:37.749 11:11:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:37.749 [2024-12-10 11:11:44.505112] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
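dd_flag_append writes two generated 32-byte strings to separate dump files, copies one onto the other with --oflag=append, and checks that the destination now holds the concatenation. Condensed from the printf/spdk_dd/[[ ]] trace around this point (variable and file names are illustrative):

  printf '%s' "$dump0" > dd.dump0      # 99ruprzul4ujtb89pwf9vm2b7tiphe3x in this run
  printf '%s' "$dump1" > dd.dump1      # 35cnb8qo8m5sz5pfmq09k1n6cgcx4qcp
  # append the contents of dump0 to dump1 instead of truncating it
  spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
  # the run passes only if dump1 now holds "<dump1 string><dump0 string>"
  [[ $(<dd.dump1) == "${dump1}${dump0}" ]]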
00:08:37.749 [2024-12-10 11:11:44.506076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61792 ] 00:08:38.007 [2024-12-10 11:11:44.679580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.007 [2024-12-10 11:11:44.786766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.266 [2024-12-10 11:11:44.978933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.266  [2024-12-10T11:11:46.467Z] Copying: 32/32 [B] (average 31 kBps) 00:08:39.641 00:08:39.641 ************************************ 00:08:39.641 END TEST dd_flag_append 00:08:39.641 ************************************ 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 35cnb8qo8m5sz5pfmq09k1n6cgcx4qcp99ruprzul4ujtb89pwf9vm2b7tiphe3x == \3\5\c\n\b\8\q\o\8\m\5\s\z\5\p\f\m\q\0\9\k\1\n\6\c\g\c\x\4\q\c\p\9\9\r\u\p\r\z\u\l\4\u\j\t\b\8\9\p\w\f\9\v\m\2\b\7\t\i\p\h\e\3\x ]] 00:08:39.641 00:08:39.641 real 0m1.844s 00:08:39.641 user 0m1.532s 00:08:39.641 sys 0m1.033s 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:39.641 ************************************ 00:08:39.641 START TEST dd_flag_directory 00:08:39.641 ************************************ 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.641 11:11:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:39.641 [2024-12-10 11:11:46.393432] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:39.641 [2024-12-10 11:11:46.394377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61832 ] 00:08:39.899 [2024-12-10 11:11:46.566448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.899 [2024-12-10 11:11:46.708711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.157 [2024-12-10 11:11:46.904260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.416 [2024-12-10 11:11:47.020486] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:40.416 [2024-12-10 11:11:47.020563] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:40.416 [2024-12-10 11:11:47.020592] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.983 [2024-12-10 11:11:47.753917] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.241 11:11:48 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.241 11:11:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:41.500 [2024-12-10 11:11:48.172177] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:41.500 [2024-12-10 11:11:48.172551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61854 ] 00:08:41.758 [2024-12-10 11:11:48.352198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.758 [2024-12-10 11:11:48.455676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.017 [2024-12-10 11:11:48.643131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.017 [2024-12-10 11:11:48.749441] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:42.017 [2024-12-10 11:11:48.749503] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:42.017 [2024-12-10 11:11:48.749531] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.975 [2024-12-10 11:11:49.480113] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:42.975 ************************************ 00:08:42.975 END TEST dd_flag_directory 00:08:42.975 ************************************ 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.975 00:08:42.975 real 0m3.450s 00:08:42.975 user 0m2.806s 00:08:42.975 sys 0m0.415s 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:42.975 11:11:49 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.975 11:11:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:43.241 ************************************ 00:08:43.241 START TEST dd_flag_nofollow 00:08:43.241 ************************************ 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.241 11:11:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.241 [2024-12-10 11:11:49.912618] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
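dd_flag_nofollow symlinks both dump files and expects spdk_dd to reject the link when nofollow is set on the corresponding side, then confirms a normal copy through the link still works. Condensed from the ln/NOT traces above (the harness's NOT helper is shown here as a plain !):

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  # must fail: reading through a symlink while --iflag=nofollow is set
  ! spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
  # must fail: writing through a symlink while --oflag=nofollow is set
  ! spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
  # control: without nofollow the copy through the link succeeds
  spdk_dd --if=dd.dump0.link --of=dd.dump1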
00:08:43.241 [2024-12-10 11:11:49.913086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61900 ] 00:08:43.500 [2024-12-10 11:11:50.097102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.500 [2024-12-10 11:11:50.214967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.758 [2024-12-10 11:11:50.402309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.758 [2024-12-10 11:11:50.515493] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:43.758 [2024-12-10 11:11:50.515861] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:43.758 [2024-12-10 11:11:50.515905] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.693 [2024-12-10 11:11:51.251882] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.951 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.952 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.952 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.952 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.952 11:11:51 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:44.952 11:11:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:44.952 [2024-12-10 11:11:51.636518] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:44.952 [2024-12-10 11:11:51.636946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61927 ] 00:08:45.210 [2024-12-10 11:11:51.828528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.210 [2024-12-10 11:11:51.963745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.469 [2024-12-10 11:11:52.179222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.728 [2024-12-10 11:11:52.304764] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:45.728 [2024-12-10 11:11:52.305077] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:45.728 [2024-12-10 11:11:52.305114] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.295 [2024-12-10 11:11:53.031330] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:46.554 11:11:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.813 [2024-12-10 11:11:53.402467] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
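The nofollow exercise traced above reduces to a short shell sketch; the dd.dump* names, the ln -fs setup and the nofollow flags are taken from the trace itself, while the paths and the harness plumbing (NOT wrapper, es bookkeeping) are simplified and only approximate:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  # reading through a symlink with --iflag=nofollow must fail with
  # "Too many levels of symbolic links" (ELOOP), so a zero exit is an error
  spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 && exit 1
  # the same applies on the write side with --oflag=nofollow
  spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow && exit 1
  # without nofollow, the copy through the link is expected to succeed
  spdk_dd --if=dd.dump0.link --of=dd.dump1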
00:08:46.813 [2024-12-10 11:11:53.402839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61941 ] 00:08:46.813 [2024-12-10 11:11:53.575133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.071 [2024-12-10 11:11:53.681020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.071 [2024-12-10 11:11:53.860164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.331  [2024-12-10T11:11:55.092Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.266 00:08:48.266 ************************************ 00:08:48.266 END TEST dd_flag_nofollow 00:08:48.266 ************************************ 00:08:48.266 11:11:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 1ylsbzofryxeyaurfu2jou35bunz2elfxjbz8bf3ye9n27mcdh1eb93u6k39ikyr2a5bvcb89a2aajvs6vwfrd35ji8j33i2pwoad8ckyypi4yzus9qz216l2x3zrdwkopltjbj2k57zc3icgkp0r6olmqrhagjxc0qbar5354t1gmtq3lvu4yf7vpfxrvfjesxtkcc1xwny53d3lwclych5yd6o8zfd17gquljm3iwlrhbc204qoyfr5tk9sun50ko42thco74f17hhls9qprvimnggocwscv66fpsumuphn9xavyshxm8y0av3zbt3nl3rc5xy1iemzdksutfbepxasnlui74155ldjfs1vtyisv7cvgkrxqafjem08k8i4nkgnr6lcucqv49gscaarajtlxhh7n5p3me7imevodqpc7de3s9yht22pyq8zv8f5mhegbb5fhiug1pepprbwmvlm4j2k1w4kz2nzh5n2dkpy74k17a5s0vv8smakbak == \1\y\l\s\b\z\o\f\r\y\x\e\y\a\u\r\f\u\2\j\o\u\3\5\b\u\n\z\2\e\l\f\x\j\b\z\8\b\f\3\y\e\9\n\2\7\m\c\d\h\1\e\b\9\3\u\6\k\3\9\i\k\y\r\2\a\5\b\v\c\b\8\9\a\2\a\a\j\v\s\6\v\w\f\r\d\3\5\j\i\8\j\3\3\i\2\p\w\o\a\d\8\c\k\y\y\p\i\4\y\z\u\s\9\q\z\2\1\6\l\2\x\3\z\r\d\w\k\o\p\l\t\j\b\j\2\k\5\7\z\c\3\i\c\g\k\p\0\r\6\o\l\m\q\r\h\a\g\j\x\c\0\q\b\a\r\5\3\5\4\t\1\g\m\t\q\3\l\v\u\4\y\f\7\v\p\f\x\r\v\f\j\e\s\x\t\k\c\c\1\x\w\n\y\5\3\d\3\l\w\c\l\y\c\h\5\y\d\6\o\8\z\f\d\1\7\g\q\u\l\j\m\3\i\w\l\r\h\b\c\2\0\4\q\o\y\f\r\5\t\k\9\s\u\n\5\0\k\o\4\2\t\h\c\o\7\4\f\1\7\h\h\l\s\9\q\p\r\v\i\m\n\g\g\o\c\w\s\c\v\6\6\f\p\s\u\m\u\p\h\n\9\x\a\v\y\s\h\x\m\8\y\0\a\v\3\z\b\t\3\n\l\3\r\c\5\x\y\1\i\e\m\z\d\k\s\u\t\f\b\e\p\x\a\s\n\l\u\i\7\4\1\5\5\l\d\j\f\s\1\v\t\y\i\s\v\7\c\v\g\k\r\x\q\a\f\j\e\m\0\8\k\8\i\4\n\k\g\n\r\6\l\c\u\c\q\v\4\9\g\s\c\a\a\r\a\j\t\l\x\h\h\7\n\5\p\3\m\e\7\i\m\e\v\o\d\q\p\c\7\d\e\3\s\9\y\h\t\2\2\p\y\q\8\z\v\8\f\5\m\h\e\g\b\b\5\f\h\i\u\g\1\p\e\p\p\r\b\w\m\v\l\m\4\j\2\k\1\w\4\k\z\2\n\z\h\5\n\2\d\k\p\y\7\4\k\1\7\a\5\s\0\v\v\8\s\m\a\k\b\a\k ]] 00:08:48.266 00:08:48.266 real 0m5.177s 00:08:48.266 user 0m4.247s 00:08:48.266 sys 0m1.314s 00:08:48.266 11:11:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.266 11:11:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:48.266 ************************************ 00:08:48.266 START TEST dd_flag_noatime 00:08:48.266 ************************************ 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733829113 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733829114 00:08:48.266 11:11:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:49.269 11:11:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:49.525 [2024-12-10 11:11:56.125969] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:08:49.525 [2024-12-10 11:11:56.126131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62001 ] 00:08:49.525 [2024-12-10 11:11:56.294661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.783 [2024-12-10 11:11:56.398181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.783 [2024-12-10 11:11:56.580639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.041  [2024-12-10T11:11:57.801Z] Copying: 512/512 [B] (average 500 kBps) 00:08:50.975 00:08:50.975 11:11:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:50.975 11:11:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733829113 )) 00:08:50.975 11:11:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:50.975 11:11:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733829114 )) 00:08:50.975 11:11:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.233 [2024-12-10 11:11:57.887510] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
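The noatime assertions being set up here (atime_if/atime_of and the stat --printf=%X calls) amount to roughly the following; the file names, flags and stat format come from the trace, the timing and comparisons are a simplified approximation of the harness:

  atime_if=$(stat --printf=%X dd.dump0)    # source atime before any copy
  atime_of=$(stat --printf=%X dd.dump1)    # destination atime before any copy
  sleep 1
  spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_if ))   # noatime: source atime unchanged
  spdk_dd --if=dd.dump0 --of=dd.dump1              # same copy without noatime
  (( atime_if < $(stat --printf=%X dd.dump0) ))    # source atime must now advance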
00:08:51.233 [2024-12-10 11:11:57.887663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62027 ] 00:08:51.491 [2024-12-10 11:11:58.073078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.491 [2024-12-10 11:11:58.181166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.749 [2024-12-10 11:11:58.366749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.749  [2024-12-10T11:11:59.510Z] Copying: 512/512 [B] (average 500 kBps) 00:08:52.684 00:08:52.684 11:11:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:52.684 11:11:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733829118 )) 00:08:52.684 00:08:52.684 real 0m4.488s 00:08:52.684 user 0m2.838s 00:08:52.684 sys 0m1.950s 00:08:52.684 11:11:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.684 ************************************ 00:08:52.684 END TEST dd_flag_noatime 00:08:52.684 ************************************ 00:08:52.684 11:11:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:52.970 ************************************ 00:08:52.970 START TEST dd_flags_misc 00:08:52.970 ************************************ 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:52.970 11:11:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:52.970 [2024-12-10 11:11:59.658200] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:52.970 [2024-12-10 11:11:59.658361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62067 ] 00:08:53.255 [2024-12-10 11:11:59.833169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.255 [2024-12-10 11:12:00.004859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.513 [2024-12-10 11:12:00.227947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.771  [2024-12-10T11:12:01.532Z] Copying: 512/512 [B] (average 500 kBps) 00:08:54.706 00:08:54.706 11:12:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e4nev0ll1io5i06rxr7o26bhgat0mi214jqkarlp5vid8bfodb0p4zsq7wkem6le1s0xkh9c3yurvhvc94d7ftfbvqzy860cy20d83ugfh2dn1bm5l5d5poo417hvry7kkc07830p2lei6c16u15mk7l7ys7ntpo2fzac8stkg10x9db8nb68znyo8ivvpf32aklh2zqmfnfc4s2vqbmg8vlujgp3qn6snucxdlfslgtrkak656ayewpil5v7viyplmmdx60k4bamy2c6mmteswdwrly2hz02yfarccfe0f09qxqve5vktw1yiayf3rme7oim2bno9j99g4odjx14srmirm2v1y0x213vxmkm6zds46da6jftdb5k3s53wzxtyqg4al7dl35l3nw3uaqxt8h78m5kcjmgy91rehsi7likb7ydn0q30ag9wm3ijbi6c981g4qt8z71s0rpk0bxubihtvvoygjcr08jfgm160l04h9t91mzogdge8gy75o == \e\4\n\e\v\0\l\l\1\i\o\5\i\0\6\r\x\r\7\o\2\6\b\h\g\a\t\0\m\i\2\1\4\j\q\k\a\r\l\p\5\v\i\d\8\b\f\o\d\b\0\p\4\z\s\q\7\w\k\e\m\6\l\e\1\s\0\x\k\h\9\c\3\y\u\r\v\h\v\c\9\4\d\7\f\t\f\b\v\q\z\y\8\6\0\c\y\2\0\d\8\3\u\g\f\h\2\d\n\1\b\m\5\l\5\d\5\p\o\o\4\1\7\h\v\r\y\7\k\k\c\0\7\8\3\0\p\2\l\e\i\6\c\1\6\u\1\5\m\k\7\l\7\y\s\7\n\t\p\o\2\f\z\a\c\8\s\t\k\g\1\0\x\9\d\b\8\n\b\6\8\z\n\y\o\8\i\v\v\p\f\3\2\a\k\l\h\2\z\q\m\f\n\f\c\4\s\2\v\q\b\m\g\8\v\l\u\j\g\p\3\q\n\6\s\n\u\c\x\d\l\f\s\l\g\t\r\k\a\k\6\5\6\a\y\e\w\p\i\l\5\v\7\v\i\y\p\l\m\m\d\x\6\0\k\4\b\a\m\y\2\c\6\m\m\t\e\s\w\d\w\r\l\y\2\h\z\0\2\y\f\a\r\c\c\f\e\0\f\0\9\q\x\q\v\e\5\v\k\t\w\1\y\i\a\y\f\3\r\m\e\7\o\i\m\2\b\n\o\9\j\9\9\g\4\o\d\j\x\1\4\s\r\m\i\r\m\2\v\1\y\0\x\2\1\3\v\x\m\k\m\6\z\d\s\4\6\d\a\6\j\f\t\d\b\5\k\3\s\5\3\w\z\x\t\y\q\g\4\a\l\7\d\l\3\5\l\3\n\w\3\u\a\q\x\t\8\h\7\8\m\5\k\c\j\m\g\y\9\1\r\e\h\s\i\7\l\i\k\b\7\y\d\n\0\q\3\0\a\g\9\w\m\3\i\j\b\i\6\c\9\8\1\g\4\q\t\8\z\7\1\s\0\r\p\k\0\b\x\u\b\i\h\t\v\v\o\y\g\j\c\r\0\8\j\f\g\m\1\6\0\l\0\4\h\9\t\9\1\m\z\o\g\d\g\e\8\g\y\7\5\o ]] 00:08:54.706 11:12:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:54.706 11:12:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:54.706 [2024-12-10 11:12:01.491179] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
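The dd_flags_misc iterations above and below walk a small flag matrix; a simplified sketch of the loop (the flag names and the 512-byte size come from the trace, the content check is paraphrased):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    # gen_bytes 512 refreshes dd.dump0 with 512 random characters per outer pass
    for flag_rw in "${flags_rw[@]}"; do
      spdk_dd --if=dd.dump0 --iflag=$flag_ro --of=dd.dump1 --oflag=$flag_rw
      # the long [[ ... == ... ]] lines in the trace then compare the contents
      # of dump0 and dump1 read back as strings, so every flag combination must
      # still produce a byte-identical copy
    done
  done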
00:08:54.706 [2024-12-10 11:12:01.491389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62094 ] 00:08:54.964 [2024-12-10 11:12:01.673598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.964 [2024-12-10 11:12:01.779411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.222 [2024-12-10 11:12:01.960995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.481  [2024-12-10T11:12:03.242Z] Copying: 512/512 [B] (average 500 kBps) 00:08:56.416 00:08:56.416 11:12:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e4nev0ll1io5i06rxr7o26bhgat0mi214jqkarlp5vid8bfodb0p4zsq7wkem6le1s0xkh9c3yurvhvc94d7ftfbvqzy860cy20d83ugfh2dn1bm5l5d5poo417hvry7kkc07830p2lei6c16u15mk7l7ys7ntpo2fzac8stkg10x9db8nb68znyo8ivvpf32aklh2zqmfnfc4s2vqbmg8vlujgp3qn6snucxdlfslgtrkak656ayewpil5v7viyplmmdx60k4bamy2c6mmteswdwrly2hz02yfarccfe0f09qxqve5vktw1yiayf3rme7oim2bno9j99g4odjx14srmirm2v1y0x213vxmkm6zds46da6jftdb5k3s53wzxtyqg4al7dl35l3nw3uaqxt8h78m5kcjmgy91rehsi7likb7ydn0q30ag9wm3ijbi6c981g4qt8z71s0rpk0bxubihtvvoygjcr08jfgm160l04h9t91mzogdge8gy75o == \e\4\n\e\v\0\l\l\1\i\o\5\i\0\6\r\x\r\7\o\2\6\b\h\g\a\t\0\m\i\2\1\4\j\q\k\a\r\l\p\5\v\i\d\8\b\f\o\d\b\0\p\4\z\s\q\7\w\k\e\m\6\l\e\1\s\0\x\k\h\9\c\3\y\u\r\v\h\v\c\9\4\d\7\f\t\f\b\v\q\z\y\8\6\0\c\y\2\0\d\8\3\u\g\f\h\2\d\n\1\b\m\5\l\5\d\5\p\o\o\4\1\7\h\v\r\y\7\k\k\c\0\7\8\3\0\p\2\l\e\i\6\c\1\6\u\1\5\m\k\7\l\7\y\s\7\n\t\p\o\2\f\z\a\c\8\s\t\k\g\1\0\x\9\d\b\8\n\b\6\8\z\n\y\o\8\i\v\v\p\f\3\2\a\k\l\h\2\z\q\m\f\n\f\c\4\s\2\v\q\b\m\g\8\v\l\u\j\g\p\3\q\n\6\s\n\u\c\x\d\l\f\s\l\g\t\r\k\a\k\6\5\6\a\y\e\w\p\i\l\5\v\7\v\i\y\p\l\m\m\d\x\6\0\k\4\b\a\m\y\2\c\6\m\m\t\e\s\w\d\w\r\l\y\2\h\z\0\2\y\f\a\r\c\c\f\e\0\f\0\9\q\x\q\v\e\5\v\k\t\w\1\y\i\a\y\f\3\r\m\e\7\o\i\m\2\b\n\o\9\j\9\9\g\4\o\d\j\x\1\4\s\r\m\i\r\m\2\v\1\y\0\x\2\1\3\v\x\m\k\m\6\z\d\s\4\6\d\a\6\j\f\t\d\b\5\k\3\s\5\3\w\z\x\t\y\q\g\4\a\l\7\d\l\3\5\l\3\n\w\3\u\a\q\x\t\8\h\7\8\m\5\k\c\j\m\g\y\9\1\r\e\h\s\i\7\l\i\k\b\7\y\d\n\0\q\3\0\a\g\9\w\m\3\i\j\b\i\6\c\9\8\1\g\4\q\t\8\z\7\1\s\0\r\p\k\0\b\x\u\b\i\h\t\v\v\o\y\g\j\c\r\0\8\j\f\g\m\1\6\0\l\0\4\h\9\t\9\1\m\z\o\g\d\g\e\8\g\y\7\5\o ]] 00:08:56.416 11:12:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:56.416 11:12:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:56.416 [2024-12-10 11:12:03.175872] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:56.416 [2024-12-10 11:12:03.176018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62116 ] 00:08:56.675 [2024-12-10 11:12:03.353027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.675 [2024-12-10 11:12:03.484500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.933 [2024-12-10 11:12:03.711326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.191  [2024-12-10T11:12:04.952Z] Copying: 512/512 [B] (average 500 kBps) 00:08:58.126 00:08:58.126 11:12:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e4nev0ll1io5i06rxr7o26bhgat0mi214jqkarlp5vid8bfodb0p4zsq7wkem6le1s0xkh9c3yurvhvc94d7ftfbvqzy860cy20d83ugfh2dn1bm5l5d5poo417hvry7kkc07830p2lei6c16u15mk7l7ys7ntpo2fzac8stkg10x9db8nb68znyo8ivvpf32aklh2zqmfnfc4s2vqbmg8vlujgp3qn6snucxdlfslgtrkak656ayewpil5v7viyplmmdx60k4bamy2c6mmteswdwrly2hz02yfarccfe0f09qxqve5vktw1yiayf3rme7oim2bno9j99g4odjx14srmirm2v1y0x213vxmkm6zds46da6jftdb5k3s53wzxtyqg4al7dl35l3nw3uaqxt8h78m5kcjmgy91rehsi7likb7ydn0q30ag9wm3ijbi6c981g4qt8z71s0rpk0bxubihtvvoygjcr08jfgm160l04h9t91mzogdge8gy75o == \e\4\n\e\v\0\l\l\1\i\o\5\i\0\6\r\x\r\7\o\2\6\b\h\g\a\t\0\m\i\2\1\4\j\q\k\a\r\l\p\5\v\i\d\8\b\f\o\d\b\0\p\4\z\s\q\7\w\k\e\m\6\l\e\1\s\0\x\k\h\9\c\3\y\u\r\v\h\v\c\9\4\d\7\f\t\f\b\v\q\z\y\8\6\0\c\y\2\0\d\8\3\u\g\f\h\2\d\n\1\b\m\5\l\5\d\5\p\o\o\4\1\7\h\v\r\y\7\k\k\c\0\7\8\3\0\p\2\l\e\i\6\c\1\6\u\1\5\m\k\7\l\7\y\s\7\n\t\p\o\2\f\z\a\c\8\s\t\k\g\1\0\x\9\d\b\8\n\b\6\8\z\n\y\o\8\i\v\v\p\f\3\2\a\k\l\h\2\z\q\m\f\n\f\c\4\s\2\v\q\b\m\g\8\v\l\u\j\g\p\3\q\n\6\s\n\u\c\x\d\l\f\s\l\g\t\r\k\a\k\6\5\6\a\y\e\w\p\i\l\5\v\7\v\i\y\p\l\m\m\d\x\6\0\k\4\b\a\m\y\2\c\6\m\m\t\e\s\w\d\w\r\l\y\2\h\z\0\2\y\f\a\r\c\c\f\e\0\f\0\9\q\x\q\v\e\5\v\k\t\w\1\y\i\a\y\f\3\r\m\e\7\o\i\m\2\b\n\o\9\j\9\9\g\4\o\d\j\x\1\4\s\r\m\i\r\m\2\v\1\y\0\x\2\1\3\v\x\m\k\m\6\z\d\s\4\6\d\a\6\j\f\t\d\b\5\k\3\s\5\3\w\z\x\t\y\q\g\4\a\l\7\d\l\3\5\l\3\n\w\3\u\a\q\x\t\8\h\7\8\m\5\k\c\j\m\g\y\9\1\r\e\h\s\i\7\l\i\k\b\7\y\d\n\0\q\3\0\a\g\9\w\m\3\i\j\b\i\6\c\9\8\1\g\4\q\t\8\z\7\1\s\0\r\p\k\0\b\x\u\b\i\h\t\v\v\o\y\g\j\c\r\0\8\j\f\g\m\1\6\0\l\0\4\h\9\t\9\1\m\z\o\g\d\g\e\8\g\y\7\5\o ]] 00:08:58.126 11:12:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:58.126 11:12:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:58.384 [2024-12-10 11:12:04.984026] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:08:58.385 [2024-12-10 11:12:04.984277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62143 ] 00:08:58.385 [2024-12-10 11:12:05.177895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.642 [2024-12-10 11:12:05.284973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.899 [2024-12-10 11:12:05.480731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.899  [2024-12-10T11:12:06.691Z] Copying: 512/512 [B] (average 500 kBps) 00:08:59.865 00:08:59.866 11:12:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e4nev0ll1io5i06rxr7o26bhgat0mi214jqkarlp5vid8bfodb0p4zsq7wkem6le1s0xkh9c3yurvhvc94d7ftfbvqzy860cy20d83ugfh2dn1bm5l5d5poo417hvry7kkc07830p2lei6c16u15mk7l7ys7ntpo2fzac8stkg10x9db8nb68znyo8ivvpf32aklh2zqmfnfc4s2vqbmg8vlujgp3qn6snucxdlfslgtrkak656ayewpil5v7viyplmmdx60k4bamy2c6mmteswdwrly2hz02yfarccfe0f09qxqve5vktw1yiayf3rme7oim2bno9j99g4odjx14srmirm2v1y0x213vxmkm6zds46da6jftdb5k3s53wzxtyqg4al7dl35l3nw3uaqxt8h78m5kcjmgy91rehsi7likb7ydn0q30ag9wm3ijbi6c981g4qt8z71s0rpk0bxubihtvvoygjcr08jfgm160l04h9t91mzogdge8gy75o == \e\4\n\e\v\0\l\l\1\i\o\5\i\0\6\r\x\r\7\o\2\6\b\h\g\a\t\0\m\i\2\1\4\j\q\k\a\r\l\p\5\v\i\d\8\b\f\o\d\b\0\p\4\z\s\q\7\w\k\e\m\6\l\e\1\s\0\x\k\h\9\c\3\y\u\r\v\h\v\c\9\4\d\7\f\t\f\b\v\q\z\y\8\6\0\c\y\2\0\d\8\3\u\g\f\h\2\d\n\1\b\m\5\l\5\d\5\p\o\o\4\1\7\h\v\r\y\7\k\k\c\0\7\8\3\0\p\2\l\e\i\6\c\1\6\u\1\5\m\k\7\l\7\y\s\7\n\t\p\o\2\f\z\a\c\8\s\t\k\g\1\0\x\9\d\b\8\n\b\6\8\z\n\y\o\8\i\v\v\p\f\3\2\a\k\l\h\2\z\q\m\f\n\f\c\4\s\2\v\q\b\m\g\8\v\l\u\j\g\p\3\q\n\6\s\n\u\c\x\d\l\f\s\l\g\t\r\k\a\k\6\5\6\a\y\e\w\p\i\l\5\v\7\v\i\y\p\l\m\m\d\x\6\0\k\4\b\a\m\y\2\c\6\m\m\t\e\s\w\d\w\r\l\y\2\h\z\0\2\y\f\a\r\c\c\f\e\0\f\0\9\q\x\q\v\e\5\v\k\t\w\1\y\i\a\y\f\3\r\m\e\7\o\i\m\2\b\n\o\9\j\9\9\g\4\o\d\j\x\1\4\s\r\m\i\r\m\2\v\1\y\0\x\2\1\3\v\x\m\k\m\6\z\d\s\4\6\d\a\6\j\f\t\d\b\5\k\3\s\5\3\w\z\x\t\y\q\g\4\a\l\7\d\l\3\5\l\3\n\w\3\u\a\q\x\t\8\h\7\8\m\5\k\c\j\m\g\y\9\1\r\e\h\s\i\7\l\i\k\b\7\y\d\n\0\q\3\0\a\g\9\w\m\3\i\j\b\i\6\c\9\8\1\g\4\q\t\8\z\7\1\s\0\r\p\k\0\b\x\u\b\i\h\t\v\v\o\y\g\j\c\r\0\8\j\f\g\m\1\6\0\l\0\4\h\9\t\9\1\m\z\o\g\d\g\e\8\g\y\7\5\o ]] 00:08:59.866 11:12:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:59.866 11:12:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:59.866 11:12:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:59.866 11:12:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:59.866 11:12:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.866 11:12:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:00.124 [2024-12-10 11:12:06.811771] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:00.124 [2024-12-10 11:12:06.812002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62164 ] 00:09:00.382 [2024-12-10 11:12:06.999652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.382 [2024-12-10 11:12:07.109255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.640 [2024-12-10 11:12:07.300021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.640  [2024-12-10T11:12:08.840Z] Copying: 512/512 [B] (average 500 kBps) 00:09:02.014 00:09:02.014 11:12:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 782qjcosi9k36ybomhkighjo49x8yxahdn2pp8fcu290x5w8kgfdv4acvohnxqd4cnrucspftx0dzvfw44xlqs0i6xiazd17pqnmc9wsi0amiu3zv0njbk2baicjf11hyqqd2knrzoei77hv83hdbhzo1t2uwq3g4jwsfop8weloxb41h57uapxk3trhzglfmtcnlgptqzczxtrwleus5esxtv3r7oull56x3aci14vqyzszwccgovjc2x3bjnd2jtc8z14jtekvcouhy2xlthoekjytr27qp9o3tvih7kmafigxmf6yksyrfdv0g3e1i7j873bpgjaozc5jhu0drpkmh3a3m5mb665iiwhtkl3s2gda9odiy3iytwdzob5i3exqzjh2c7t385ec2v2p5w65xgiysyl3pk7t58pee511e52208tzncv8etydzfdb6hncs054zbsykmz5zglxa76nfm6obtob0rnae63jpf7mm2it3xk54z5fozloeftn == \7\8\2\q\j\c\o\s\i\9\k\3\6\y\b\o\m\h\k\i\g\h\j\o\4\9\x\8\y\x\a\h\d\n\2\p\p\8\f\c\u\2\9\0\x\5\w\8\k\g\f\d\v\4\a\c\v\o\h\n\x\q\d\4\c\n\r\u\c\s\p\f\t\x\0\d\z\v\f\w\4\4\x\l\q\s\0\i\6\x\i\a\z\d\1\7\p\q\n\m\c\9\w\s\i\0\a\m\i\u\3\z\v\0\n\j\b\k\2\b\a\i\c\j\f\1\1\h\y\q\q\d\2\k\n\r\z\o\e\i\7\7\h\v\8\3\h\d\b\h\z\o\1\t\2\u\w\q\3\g\4\j\w\s\f\o\p\8\w\e\l\o\x\b\4\1\h\5\7\u\a\p\x\k\3\t\r\h\z\g\l\f\m\t\c\n\l\g\p\t\q\z\c\z\x\t\r\w\l\e\u\s\5\e\s\x\t\v\3\r\7\o\u\l\l\5\6\x\3\a\c\i\1\4\v\q\y\z\s\z\w\c\c\g\o\v\j\c\2\x\3\b\j\n\d\2\j\t\c\8\z\1\4\j\t\e\k\v\c\o\u\h\y\2\x\l\t\h\o\e\k\j\y\t\r\2\7\q\p\9\o\3\t\v\i\h\7\k\m\a\f\i\g\x\m\f\6\y\k\s\y\r\f\d\v\0\g\3\e\1\i\7\j\8\7\3\b\p\g\j\a\o\z\c\5\j\h\u\0\d\r\p\k\m\h\3\a\3\m\5\m\b\6\6\5\i\i\w\h\t\k\l\3\s\2\g\d\a\9\o\d\i\y\3\i\y\t\w\d\z\o\b\5\i\3\e\x\q\z\j\h\2\c\7\t\3\8\5\e\c\2\v\2\p\5\w\6\5\x\g\i\y\s\y\l\3\p\k\7\t\5\8\p\e\e\5\1\1\e\5\2\2\0\8\t\z\n\c\v\8\e\t\y\d\z\f\d\b\6\h\n\c\s\0\5\4\z\b\s\y\k\m\z\5\z\g\l\x\a\7\6\n\f\m\6\o\b\t\o\b\0\r\n\a\e\6\3\j\p\f\7\m\m\2\i\t\3\x\k\5\4\z\5\f\o\z\l\o\e\f\t\n ]] 00:09:02.014 11:12:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:02.014 11:12:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:02.014 [2024-12-10 11:12:08.586844] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:02.014 [2024-12-10 11:12:08.587074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62186 ] 00:09:02.014 [2024-12-10 11:12:08.772739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.272 [2024-12-10 11:12:08.943797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.530 [2024-12-10 11:12:09.188255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.530  [2024-12-10T11:12:10.729Z] Copying: 512/512 [B] (average 500 kBps) 00:09:03.903 00:09:03.903 11:12:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 782qjcosi9k36ybomhkighjo49x8yxahdn2pp8fcu290x5w8kgfdv4acvohnxqd4cnrucspftx0dzvfw44xlqs0i6xiazd17pqnmc9wsi0amiu3zv0njbk2baicjf11hyqqd2knrzoei77hv83hdbhzo1t2uwq3g4jwsfop8weloxb41h57uapxk3trhzglfmtcnlgptqzczxtrwleus5esxtv3r7oull56x3aci14vqyzszwccgovjc2x3bjnd2jtc8z14jtekvcouhy2xlthoekjytr27qp9o3tvih7kmafigxmf6yksyrfdv0g3e1i7j873bpgjaozc5jhu0drpkmh3a3m5mb665iiwhtkl3s2gda9odiy3iytwdzob5i3exqzjh2c7t385ec2v2p5w65xgiysyl3pk7t58pee511e52208tzncv8etydzfdb6hncs054zbsykmz5zglxa76nfm6obtob0rnae63jpf7mm2it3xk54z5fozloeftn == \7\8\2\q\j\c\o\s\i\9\k\3\6\y\b\o\m\h\k\i\g\h\j\o\4\9\x\8\y\x\a\h\d\n\2\p\p\8\f\c\u\2\9\0\x\5\w\8\k\g\f\d\v\4\a\c\v\o\h\n\x\q\d\4\c\n\r\u\c\s\p\f\t\x\0\d\z\v\f\w\4\4\x\l\q\s\0\i\6\x\i\a\z\d\1\7\p\q\n\m\c\9\w\s\i\0\a\m\i\u\3\z\v\0\n\j\b\k\2\b\a\i\c\j\f\1\1\h\y\q\q\d\2\k\n\r\z\o\e\i\7\7\h\v\8\3\h\d\b\h\z\o\1\t\2\u\w\q\3\g\4\j\w\s\f\o\p\8\w\e\l\o\x\b\4\1\h\5\7\u\a\p\x\k\3\t\r\h\z\g\l\f\m\t\c\n\l\g\p\t\q\z\c\z\x\t\r\w\l\e\u\s\5\e\s\x\t\v\3\r\7\o\u\l\l\5\6\x\3\a\c\i\1\4\v\q\y\z\s\z\w\c\c\g\o\v\j\c\2\x\3\b\j\n\d\2\j\t\c\8\z\1\4\j\t\e\k\v\c\o\u\h\y\2\x\l\t\h\o\e\k\j\y\t\r\2\7\q\p\9\o\3\t\v\i\h\7\k\m\a\f\i\g\x\m\f\6\y\k\s\y\r\f\d\v\0\g\3\e\1\i\7\j\8\7\3\b\p\g\j\a\o\z\c\5\j\h\u\0\d\r\p\k\m\h\3\a\3\m\5\m\b\6\6\5\i\i\w\h\t\k\l\3\s\2\g\d\a\9\o\d\i\y\3\i\y\t\w\d\z\o\b\5\i\3\e\x\q\z\j\h\2\c\7\t\3\8\5\e\c\2\v\2\p\5\w\6\5\x\g\i\y\s\y\l\3\p\k\7\t\5\8\p\e\e\5\1\1\e\5\2\2\0\8\t\z\n\c\v\8\e\t\y\d\z\f\d\b\6\h\n\c\s\0\5\4\z\b\s\y\k\m\z\5\z\g\l\x\a\7\6\n\f\m\6\o\b\t\o\b\0\r\n\a\e\6\3\j\p\f\7\m\m\2\i\t\3\x\k\5\4\z\5\f\o\z\l\o\e\f\t\n ]] 00:09:03.903 11:12:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:03.903 11:12:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:03.903 [2024-12-10 11:12:10.509143] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:03.903 [2024-12-10 11:12:10.509293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62213 ] 00:09:03.903 [2024-12-10 11:12:10.681983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.161 [2024-12-10 11:12:10.846065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.420 [2024-12-10 11:12:11.037885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.420  [2024-12-10T11:12:12.658Z] Copying: 512/512 [B] (average 250 kBps) 00:09:05.832 00:09:05.832 11:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 782qjcosi9k36ybomhkighjo49x8yxahdn2pp8fcu290x5w8kgfdv4acvohnxqd4cnrucspftx0dzvfw44xlqs0i6xiazd17pqnmc9wsi0amiu3zv0njbk2baicjf11hyqqd2knrzoei77hv83hdbhzo1t2uwq3g4jwsfop8weloxb41h57uapxk3trhzglfmtcnlgptqzczxtrwleus5esxtv3r7oull56x3aci14vqyzszwccgovjc2x3bjnd2jtc8z14jtekvcouhy2xlthoekjytr27qp9o3tvih7kmafigxmf6yksyrfdv0g3e1i7j873bpgjaozc5jhu0drpkmh3a3m5mb665iiwhtkl3s2gda9odiy3iytwdzob5i3exqzjh2c7t385ec2v2p5w65xgiysyl3pk7t58pee511e52208tzncv8etydzfdb6hncs054zbsykmz5zglxa76nfm6obtob0rnae63jpf7mm2it3xk54z5fozloeftn == \7\8\2\q\j\c\o\s\i\9\k\3\6\y\b\o\m\h\k\i\g\h\j\o\4\9\x\8\y\x\a\h\d\n\2\p\p\8\f\c\u\2\9\0\x\5\w\8\k\g\f\d\v\4\a\c\v\o\h\n\x\q\d\4\c\n\r\u\c\s\p\f\t\x\0\d\z\v\f\w\4\4\x\l\q\s\0\i\6\x\i\a\z\d\1\7\p\q\n\m\c\9\w\s\i\0\a\m\i\u\3\z\v\0\n\j\b\k\2\b\a\i\c\j\f\1\1\h\y\q\q\d\2\k\n\r\z\o\e\i\7\7\h\v\8\3\h\d\b\h\z\o\1\t\2\u\w\q\3\g\4\j\w\s\f\o\p\8\w\e\l\o\x\b\4\1\h\5\7\u\a\p\x\k\3\t\r\h\z\g\l\f\m\t\c\n\l\g\p\t\q\z\c\z\x\t\r\w\l\e\u\s\5\e\s\x\t\v\3\r\7\o\u\l\l\5\6\x\3\a\c\i\1\4\v\q\y\z\s\z\w\c\c\g\o\v\j\c\2\x\3\b\j\n\d\2\j\t\c\8\z\1\4\j\t\e\k\v\c\o\u\h\y\2\x\l\t\h\o\e\k\j\y\t\r\2\7\q\p\9\o\3\t\v\i\h\7\k\m\a\f\i\g\x\m\f\6\y\k\s\y\r\f\d\v\0\g\3\e\1\i\7\j\8\7\3\b\p\g\j\a\o\z\c\5\j\h\u\0\d\r\p\k\m\h\3\a\3\m\5\m\b\6\6\5\i\i\w\h\t\k\l\3\s\2\g\d\a\9\o\d\i\y\3\i\y\t\w\d\z\o\b\5\i\3\e\x\q\z\j\h\2\c\7\t\3\8\5\e\c\2\v\2\p\5\w\6\5\x\g\i\y\s\y\l\3\p\k\7\t\5\8\p\e\e\5\1\1\e\5\2\2\0\8\t\z\n\c\v\8\e\t\y\d\z\f\d\b\6\h\n\c\s\0\5\4\z\b\s\y\k\m\z\5\z\g\l\x\a\7\6\n\f\m\6\o\b\t\o\b\0\r\n\a\e\6\3\j\p\f\7\m\m\2\i\t\3\x\k\5\4\z\5\f\o\z\l\o\e\f\t\n ]] 00:09:05.832 11:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:05.832 11:12:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:05.832 [2024-12-10 11:12:12.480205] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:05.832 [2024-12-10 11:12:12.480886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62240 ] 00:09:06.090 [2024-12-10 11:12:12.677750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.090 [2024-12-10 11:12:12.864269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.348 [2024-12-10 11:12:13.120281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.605  [2024-12-10T11:12:14.804Z] Copying: 512/512 [B] (average 166 kBps) 00:09:07.978 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 782qjcosi9k36ybomhkighjo49x8yxahdn2pp8fcu290x5w8kgfdv4acvohnxqd4cnrucspftx0dzvfw44xlqs0i6xiazd17pqnmc9wsi0amiu3zv0njbk2baicjf11hyqqd2knrzoei77hv83hdbhzo1t2uwq3g4jwsfop8weloxb41h57uapxk3trhzglfmtcnlgptqzczxtrwleus5esxtv3r7oull56x3aci14vqyzszwccgovjc2x3bjnd2jtc8z14jtekvcouhy2xlthoekjytr27qp9o3tvih7kmafigxmf6yksyrfdv0g3e1i7j873bpgjaozc5jhu0drpkmh3a3m5mb665iiwhtkl3s2gda9odiy3iytwdzob5i3exqzjh2c7t385ec2v2p5w65xgiysyl3pk7t58pee511e52208tzncv8etydzfdb6hncs054zbsykmz5zglxa76nfm6obtob0rnae63jpf7mm2it3xk54z5fozloeftn == \7\8\2\q\j\c\o\s\i\9\k\3\6\y\b\o\m\h\k\i\g\h\j\o\4\9\x\8\y\x\a\h\d\n\2\p\p\8\f\c\u\2\9\0\x\5\w\8\k\g\f\d\v\4\a\c\v\o\h\n\x\q\d\4\c\n\r\u\c\s\p\f\t\x\0\d\z\v\f\w\4\4\x\l\q\s\0\i\6\x\i\a\z\d\1\7\p\q\n\m\c\9\w\s\i\0\a\m\i\u\3\z\v\0\n\j\b\k\2\b\a\i\c\j\f\1\1\h\y\q\q\d\2\k\n\r\z\o\e\i\7\7\h\v\8\3\h\d\b\h\z\o\1\t\2\u\w\q\3\g\4\j\w\s\f\o\p\8\w\e\l\o\x\b\4\1\h\5\7\u\a\p\x\k\3\t\r\h\z\g\l\f\m\t\c\n\l\g\p\t\q\z\c\z\x\t\r\w\l\e\u\s\5\e\s\x\t\v\3\r\7\o\u\l\l\5\6\x\3\a\c\i\1\4\v\q\y\z\s\z\w\c\c\g\o\v\j\c\2\x\3\b\j\n\d\2\j\t\c\8\z\1\4\j\t\e\k\v\c\o\u\h\y\2\x\l\t\h\o\e\k\j\y\t\r\2\7\q\p\9\o\3\t\v\i\h\7\k\m\a\f\i\g\x\m\f\6\y\k\s\y\r\f\d\v\0\g\3\e\1\i\7\j\8\7\3\b\p\g\j\a\o\z\c\5\j\h\u\0\d\r\p\k\m\h\3\a\3\m\5\m\b\6\6\5\i\i\w\h\t\k\l\3\s\2\g\d\a\9\o\d\i\y\3\i\y\t\w\d\z\o\b\5\i\3\e\x\q\z\j\h\2\c\7\t\3\8\5\e\c\2\v\2\p\5\w\6\5\x\g\i\y\s\y\l\3\p\k\7\t\5\8\p\e\e\5\1\1\e\5\2\2\0\8\t\z\n\c\v\8\e\t\y\d\z\f\d\b\6\h\n\c\s\0\5\4\z\b\s\y\k\m\z\5\z\g\l\x\a\7\6\n\f\m\6\o\b\t\o\b\0\r\n\a\e\6\3\j\p\f\7\m\m\2\i\t\3\x\k\5\4\z\5\f\o\z\l\o\e\f\t\n ]] 00:09:07.978 00:09:07.978 real 0m15.132s 00:09:07.978 user 0m12.513s 00:09:07.978 sys 0m8.180s 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.978 ************************************ 00:09:07.978 END TEST dd_flags_misc 00:09:07.978 ************************************ 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:09:07.978 * Second test run, disabling liburing, forcing AIO 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.978 ************************************ 00:09:07.978 START TEST dd_flag_append_forced_aio 00:09:07.978 ************************************ 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=qaafeuq1h3k41tbr1glfsb49epk115je 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=smlh4jrywvfrtdzajsuz0d6o9q8al7y4 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s qaafeuq1h3k41tbr1glfsb49epk115je 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s smlh4jrywvfrtdzajsuz0d6o9q8al7y4 00:09:07.978 11:12:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:08.235 [2024-12-10 11:12:14.883818] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
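The append check that completes in the next entry boils down to the following; the two 32-character strings, the gen_bytes helper, the --aio switch and --oflag=append are all visible in the trace, the rest is a simplification of the harness:

  dump0=$(gen_bytes 32)              # e.g. qaafeuq1h3k41tbr1glfsb49epk115je
  dump1=$(gen_bytes 32)              # e.g. smlh4jrywvfrtdzajsuz0d6o9q8al7y4
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
  # with append, the destination must now contain dump1 followed by dump0
  [[ $(< dd.dump1) == "$dump1$dump0" ]]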
00:09:08.235 [2024-12-10 11:12:14.884365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62286 ] 00:09:08.493 [2024-12-10 11:12:15.077084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.493 [2024-12-10 11:12:15.237681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.750 [2024-12-10 11:12:15.420585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.750  [2024-12-10T11:12:16.950Z] Copying: 32/32 [B] (average 31 kBps) 00:09:10.124 00:09:10.124 ************************************ 00:09:10.124 END TEST dd_flag_append_forced_aio 00:09:10.124 ************************************ 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ smlh4jrywvfrtdzajsuz0d6o9q8al7y4qaafeuq1h3k41tbr1glfsb49epk115je == \s\m\l\h\4\j\r\y\w\v\f\r\t\d\z\a\j\s\u\z\0\d\6\o\9\q\8\a\l\7\y\4\q\a\a\f\e\u\q\1\h\3\k\4\1\t\b\r\1\g\l\f\s\b\4\9\e\p\k\1\1\5\j\e ]] 00:09:10.124 00:09:10.124 real 0m2.160s 00:09:10.124 user 0m1.784s 00:09:10.124 sys 0m0.239s 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:10.124 ************************************ 00:09:10.124 START TEST dd_flag_directory_forced_aio 00:09:10.124 ************************************ 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.124 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:09:10.125 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.125 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.125 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.125 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.125 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.125 11:12:16 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.125 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.125 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.125 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.125 11:12:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.383 [2024-12-10 11:12:17.041597] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:10.383 [2024-12-10 11:12:17.041792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62330 ] 00:09:10.641 [2024-12-10 11:12:17.229250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.641 [2024-12-10 11:12:17.413385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.898 [2024-12-10 11:12:17.607219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.898 [2024-12-10 11:12:17.714314] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:10.898 [2024-12-10 11:12:17.714444] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:10.898 [2024-12-10 11:12:17.714486] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:12.329 [2024-12-10 11:12:18.721866] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:12.329 11:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:12.587 [2024-12-10 11:12:19.276421] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:12.587 [2024-12-10 11:12:19.276967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62363 ] 00:09:12.844 [2024-12-10 11:12:19.468629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.844 [2024-12-10 11:12:19.655534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.102 [2024-12-10 11:12:19.854872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.359 [2024-12-10 11:12:20.000443] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:13.359 [2024-12-10 11:12:20.000778] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:13.359 [2024-12-10 11:12:20.000815] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.293 [2024-12-10 11:12:20.972272] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:14.551 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:09:14.551 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.551 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:09:14.551 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:09:14.551 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:09:14.551 11:12:21 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.551 00:09:14.551 real 0m4.407s 00:09:14.551 user 0m3.664s 00:09:14.551 sys 0m0.461s 00:09:14.551 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.551 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:14.551 ************************************ 00:09:14.551 END TEST dd_flag_directory_forced_aio 00:09:14.551 ************************************ 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:14.809 ************************************ 00:09:14.809 START TEST dd_flag_nofollow_forced_aio 00:09:14.809 ************************************ 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:14.809 11:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:14.809 [2024-12-10 11:12:21.524696] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:14.809 [2024-12-10 11:12:21.524923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62409 ] 00:09:15.067 [2024-12-10 11:12:21.716929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.325 [2024-12-10 11:12:21.904266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.583 [2024-12-10 11:12:22.155551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.583 [2024-12-10 11:12:22.294870] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:15.583 [2024-12-10 11:12:22.295286] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:15.583 [2024-12-10 11:12:22.295339] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:16.516 [2024-12-10 11:12:23.104556] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:16.774 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:09:16.774 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:16.774 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:16.775 11:12:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:16.775 [2024-12-10 11:12:23.499703] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:16.775 [2024-12-10 11:12:23.499894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62436 ] 00:09:17.033 [2024-12-10 11:12:23.674079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.033 [2024-12-10 11:12:23.780274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.291 [2024-12-10 11:12:23.968507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.291 [2024-12-10 11:12:24.078895] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:17.291 [2024-12-10 11:12:24.079251] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:17.291 [2024-12-10 11:12:24.079304] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:18.228 [2024-12-10 11:12:24.952118] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:18.793 11:12:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:18.793 [2024-12-10 11:12:25.425662] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:18.793 [2024-12-10 11:12:25.425877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62461 ] 00:09:18.793 [2024-12-10 11:12:25.594858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.051 [2024-12-10 11:12:25.757698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.309 [2024-12-10 11:12:26.017282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.567  [2024-12-10T11:12:27.770Z] Copying: 512/512 [B] (average 500 kBps) 00:09:20.944 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ vq47g8169odwec2t5up3it2e8d6dfuc1p3pegsdd2eyw5jmm472m0iqt5xx5jm2fb72t3uk6qoh7ggzt7o8v04wjtlnnnc6znpy3kbgma62dxrr6ph9br33xrjgowibtdtihdfg37zbnwosmpwxgtxmw15v0fwojtlgi6agxejuy0hwe5whupdd5fqmkbqpgkouqlh4qzphww3o43nlxdle23li8rcf3031rwjraafptrrykbrl1j3fbh2m833drgbv2u3u756hvs44fxyfzana4uwj1hs0tnq7adtypsk95e0o00waqzh2x4zs5pch30oybcx8qy2q2xqvsx3wxnchi0otns0ase992rfuu2h7lle06i3zgc83qkbx0dfi83n1qx8xsq5d9mpwjpdefthhjis0f2g8ys42z21gmtvnm6ujthal0azujzd12lr819m9jg27yb7k048irlbayee9v7e63rrhlv4hh5xeq02nghn1cx5t17zoal3ealofy == \v\q\4\7\g\8\1\6\9\o\d\w\e\c\2\t\5\u\p\3\i\t\2\e\8\d\6\d\f\u\c\1\p\3\p\e\g\s\d\d\2\e\y\w\5\j\m\m\4\7\2\m\0\i\q\t\5\x\x\5\j\m\2\f\b\7\2\t\3\u\k\6\q\o\h\7\g\g\z\t\7\o\8\v\0\4\w\j\t\l\n\n\n\c\6\z\n\p\y\3\k\b\g\m\a\6\2\d\x\r\r\6\p\h\9\b\r\3\3\x\r\j\g\o\w\i\b\t\d\t\i\h\d\f\g\3\7\z\b\n\w\o\s\m\p\w\x\g\t\x\m\w\1\5\v\0\f\w\o\j\t\l\g\i\6\a\g\x\e\j\u\y\0\h\w\e\5\w\h\u\p\d\d\5\f\q\m\k\b\q\p\g\k\o\u\q\l\h\4\q\z\p\h\w\w\3\o\4\3\n\l\x\d\l\e\2\3\l\i\8\r\c\f\3\0\3\1\r\w\j\r\a\a\f\p\t\r\r\y\k\b\r\l\1\j\3\f\b\h\2\m\8\3\3\d\r\g\b\v\2\u\3\u\7\5\6\h\v\s\4\4\f\x\y\f\z\a\n\a\4\u\w\j\1\h\s\0\t\n\q\7\a\d\t\y\p\s\k\9\5\e\0\o\0\0\w\a\q\z\h\2\x\4\z\s\5\p\c\h\3\0\o\y\b\c\x\8\q\y\2\q\2\x\q\v\s\x\3\w\x\n\c\h\i\0\o\t\n\s\0\a\s\e\9\9\2\r\f\u\u\2\h\7\l\l\e\0\6\i\3\z\g\c\8\3\q\k\b\x\0\d\f\i\8\3\n\1\q\x\8\x\s\q\5\d\9\m\p\w\j\p\d\e\f\t\h\h\j\i\s\0\f\2\g\8\y\s\4\2\z\2\1\g\m\t\v\n\m\6\u\j\t\h\a\l\0\a\z\u\j\z\d\1\2\l\r\8\1\9\m\9\j\g\2\7\y\b\7\k\0\4\8\i\r\l\b\a\y\e\e\9\v\7\e\6\3\r\r\h\l\v\4\h\h\5\x\e\q\0\2\n\g\h\n\1\c\x\5\t\1\7\z\o\a\l\3\e\a\l\o\f\y ]] 00:09:20.944 00:09:20.944 real 0m6.117s 00:09:20.944 user 0m5.074s 00:09:20.944 sys 0m0.665s 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.944 ************************************ 00:09:20.944 END TEST dd_flag_nofollow_forced_aio 00:09:20.944 ************************************ 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:20.944 ************************************ 00:09:20.944 START TEST dd_flag_noatime_forced_aio 00:09:20.944 ************************************ 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733829146 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733829147 00:09:20.944 11:12:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:09:21.879 11:12:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:22.137 [2024-12-10 11:12:28.709321] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:22.137 [2024-12-10 11:12:28.709865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62519 ] 00:09:22.137 [2024-12-10 11:12:28.901477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.396 [2024-12-10 11:12:29.062436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.654 [2024-12-10 11:12:29.245631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.654  [2024-12-10T11:12:30.414Z] Copying: 512/512 [B] (average 500 kBps) 00:09:23.588 00:09:23.845 11:12:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:23.845 11:12:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733829146 )) 00:09:23.845 11:12:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:23.845 11:12:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733829147 )) 00:09:23.845 11:12:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:23.845 [2024-12-10 11:12:30.576376] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:23.845 [2024-12-10 11:12:30.576961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62547 ] 00:09:24.103 [2024-12-10 11:12:30.772576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.360 [2024-12-10 11:12:30.958029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.361 [2024-12-10 11:12:31.139204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.618  [2024-12-10T11:12:32.819Z] Copying: 512/512 [B] (average 500 kBps) 00:09:25.993 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733829151 )) 00:09:25.993 00:09:25.993 real 0m4.865s 00:09:25.993 user 0m3.173s 00:09:25.993 sys 0m0.431s 00:09:25.993 ************************************ 00:09:25.993 END TEST dd_flag_noatime_forced_aio 00:09:25.993 ************************************ 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.993 ************************************ 00:09:25.993 START TEST dd_flags_misc_forced_aio 00:09:25.993 ************************************ 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:25.993 11:12:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:25.993 [2024-12-10 11:12:32.603429] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:25.993 [2024-12-10 11:12:32.603856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62587 ] 00:09:25.993 [2024-12-10 11:12:32.799099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.251 [2024-12-10 11:12:32.906530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.510 [2024-12-10 11:12:33.094662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.510  [2024-12-10T11:12:34.775Z] Copying: 512/512 [B] (average 500 kBps) 00:09:27.949 00:09:27.949 11:12:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xakpv3xcn54hh5wxr23r4pkwjfwzn4tmdlzanmrd0ok52u1431jgkaimxlv1xq5h9vzj8kyab6n7d8jly978yy9fa3ofcvflzk9jyrofavk1q7nuelh68v81s2abvg302e9lg7vptjrv75910x4lk0tmo95qu6ffuuda0cnbpyrf3bchwt0x6s5lpztm8kncfva0ccycgg44a9xlvuig3zyqdid59j8z7ujrj01l2khdi8jvo00xqz5zjm015vdev3vgonxprjh3i3m3seyks1db232zb6psejai407nijhg8cfdiucqgni19cl8w69lqhecbzp7kjfqhzkhy4vyc50ra951mok46f11nz4tx6fd0npfsem5l3l1elb3jj7xvm9p7ob12thdynee739l4llkc4bkpw9u6a7kjm8r3c5l7rdi9um4gza4as2js5v4glmkj4xqbnqz2zc9cfgf9o6hzb54cjkrtanxgjfr54e2fngg3smhztbkjt3n3zb4 == 
\x\a\k\p\v\3\x\c\n\5\4\h\h\5\w\x\r\2\3\r\4\p\k\w\j\f\w\z\n\4\t\m\d\l\z\a\n\m\r\d\0\o\k\5\2\u\1\4\3\1\j\g\k\a\i\m\x\l\v\1\x\q\5\h\9\v\z\j\8\k\y\a\b\6\n\7\d\8\j\l\y\9\7\8\y\y\9\f\a\3\o\f\c\v\f\l\z\k\9\j\y\r\o\f\a\v\k\1\q\7\n\u\e\l\h\6\8\v\8\1\s\2\a\b\v\g\3\0\2\e\9\l\g\7\v\p\t\j\r\v\7\5\9\1\0\x\4\l\k\0\t\m\o\9\5\q\u\6\f\f\u\u\d\a\0\c\n\b\p\y\r\f\3\b\c\h\w\t\0\x\6\s\5\l\p\z\t\m\8\k\n\c\f\v\a\0\c\c\y\c\g\g\4\4\a\9\x\l\v\u\i\g\3\z\y\q\d\i\d\5\9\j\8\z\7\u\j\r\j\0\1\l\2\k\h\d\i\8\j\v\o\0\0\x\q\z\5\z\j\m\0\1\5\v\d\e\v\3\v\g\o\n\x\p\r\j\h\3\i\3\m\3\s\e\y\k\s\1\d\b\2\3\2\z\b\6\p\s\e\j\a\i\4\0\7\n\i\j\h\g\8\c\f\d\i\u\c\q\g\n\i\1\9\c\l\8\w\6\9\l\q\h\e\c\b\z\p\7\k\j\f\q\h\z\k\h\y\4\v\y\c\5\0\r\a\9\5\1\m\o\k\4\6\f\1\1\n\z\4\t\x\6\f\d\0\n\p\f\s\e\m\5\l\3\l\1\e\l\b\3\j\j\7\x\v\m\9\p\7\o\b\1\2\t\h\d\y\n\e\e\7\3\9\l\4\l\l\k\c\4\b\k\p\w\9\u\6\a\7\k\j\m\8\r\3\c\5\l\7\r\d\i\9\u\m\4\g\z\a\4\a\s\2\j\s\5\v\4\g\l\m\k\j\4\x\q\b\n\q\z\2\z\c\9\c\f\g\f\9\o\6\h\z\b\5\4\c\j\k\r\t\a\n\x\g\j\f\r\5\4\e\2\f\n\g\g\3\s\m\h\z\t\b\k\j\t\3\n\3\z\b\4 ]] 00:09:27.949 11:12:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:27.949 11:12:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:27.949 [2024-12-10 11:12:34.499830] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:27.949 [2024-12-10 11:12:34.500001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62606 ] 00:09:27.949 [2024-12-10 11:12:34.681610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.207 [2024-12-10 11:12:34.866387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.465 [2024-12-10 11:12:35.088233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:28.465  [2024-12-10T11:12:36.666Z] Copying: 512/512 [B] (average 500 kBps) 00:09:29.840 00:09:29.840 11:12:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xakpv3xcn54hh5wxr23r4pkwjfwzn4tmdlzanmrd0ok52u1431jgkaimxlv1xq5h9vzj8kyab6n7d8jly978yy9fa3ofcvflzk9jyrofavk1q7nuelh68v81s2abvg302e9lg7vptjrv75910x4lk0tmo95qu6ffuuda0cnbpyrf3bchwt0x6s5lpztm8kncfva0ccycgg44a9xlvuig3zyqdid59j8z7ujrj01l2khdi8jvo00xqz5zjm015vdev3vgonxprjh3i3m3seyks1db232zb6psejai407nijhg8cfdiucqgni19cl8w69lqhecbzp7kjfqhzkhy4vyc50ra951mok46f11nz4tx6fd0npfsem5l3l1elb3jj7xvm9p7ob12thdynee739l4llkc4bkpw9u6a7kjm8r3c5l7rdi9um4gza4as2js5v4glmkj4xqbnqz2zc9cfgf9o6hzb54cjkrtanxgjfr54e2fngg3smhztbkjt3n3zb4 == 
\x\a\k\p\v\3\x\c\n\5\4\h\h\5\w\x\r\2\3\r\4\p\k\w\j\f\w\z\n\4\t\m\d\l\z\a\n\m\r\d\0\o\k\5\2\u\1\4\3\1\j\g\k\a\i\m\x\l\v\1\x\q\5\h\9\v\z\j\8\k\y\a\b\6\n\7\d\8\j\l\y\9\7\8\y\y\9\f\a\3\o\f\c\v\f\l\z\k\9\j\y\r\o\f\a\v\k\1\q\7\n\u\e\l\h\6\8\v\8\1\s\2\a\b\v\g\3\0\2\e\9\l\g\7\v\p\t\j\r\v\7\5\9\1\0\x\4\l\k\0\t\m\o\9\5\q\u\6\f\f\u\u\d\a\0\c\n\b\p\y\r\f\3\b\c\h\w\t\0\x\6\s\5\l\p\z\t\m\8\k\n\c\f\v\a\0\c\c\y\c\g\g\4\4\a\9\x\l\v\u\i\g\3\z\y\q\d\i\d\5\9\j\8\z\7\u\j\r\j\0\1\l\2\k\h\d\i\8\j\v\o\0\0\x\q\z\5\z\j\m\0\1\5\v\d\e\v\3\v\g\o\n\x\p\r\j\h\3\i\3\m\3\s\e\y\k\s\1\d\b\2\3\2\z\b\6\p\s\e\j\a\i\4\0\7\n\i\j\h\g\8\c\f\d\i\u\c\q\g\n\i\1\9\c\l\8\w\6\9\l\q\h\e\c\b\z\p\7\k\j\f\q\h\z\k\h\y\4\v\y\c\5\0\r\a\9\5\1\m\o\k\4\6\f\1\1\n\z\4\t\x\6\f\d\0\n\p\f\s\e\m\5\l\3\l\1\e\l\b\3\j\j\7\x\v\m\9\p\7\o\b\1\2\t\h\d\y\n\e\e\7\3\9\l\4\l\l\k\c\4\b\k\p\w\9\u\6\a\7\k\j\m\8\r\3\c\5\l\7\r\d\i\9\u\m\4\g\z\a\4\a\s\2\j\s\5\v\4\g\l\m\k\j\4\x\q\b\n\q\z\2\z\c\9\c\f\g\f\9\o\6\h\z\b\5\4\c\j\k\r\t\a\n\x\g\j\f\r\5\4\e\2\f\n\g\g\3\s\m\h\z\t\b\k\j\t\3\n\3\z\b\4 ]] 00:09:29.840 11:12:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:29.840 11:12:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:29.840 [2024-12-10 11:12:36.560502] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:29.840 [2024-12-10 11:12:36.560730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62631 ] 00:09:30.098 [2024-12-10 11:12:36.756789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.356 [2024-12-10 11:12:36.941403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.614 [2024-12-10 11:12:37.212723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.614  [2024-12-10T11:12:38.823Z] Copying: 512/512 [B] (average 166 kBps) 00:09:31.997 00:09:31.997 11:12:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xakpv3xcn54hh5wxr23r4pkwjfwzn4tmdlzanmrd0ok52u1431jgkaimxlv1xq5h9vzj8kyab6n7d8jly978yy9fa3ofcvflzk9jyrofavk1q7nuelh68v81s2abvg302e9lg7vptjrv75910x4lk0tmo95qu6ffuuda0cnbpyrf3bchwt0x6s5lpztm8kncfva0ccycgg44a9xlvuig3zyqdid59j8z7ujrj01l2khdi8jvo00xqz5zjm015vdev3vgonxprjh3i3m3seyks1db232zb6psejai407nijhg8cfdiucqgni19cl8w69lqhecbzp7kjfqhzkhy4vyc50ra951mok46f11nz4tx6fd0npfsem5l3l1elb3jj7xvm9p7ob12thdynee739l4llkc4bkpw9u6a7kjm8r3c5l7rdi9um4gza4as2js5v4glmkj4xqbnqz2zc9cfgf9o6hzb54cjkrtanxgjfr54e2fngg3smhztbkjt3n3zb4 == 
\x\a\k\p\v\3\x\c\n\5\4\h\h\5\w\x\r\2\3\r\4\p\k\w\j\f\w\z\n\4\t\m\d\l\z\a\n\m\r\d\0\o\k\5\2\u\1\4\3\1\j\g\k\a\i\m\x\l\v\1\x\q\5\h\9\v\z\j\8\k\y\a\b\6\n\7\d\8\j\l\y\9\7\8\y\y\9\f\a\3\o\f\c\v\f\l\z\k\9\j\y\r\o\f\a\v\k\1\q\7\n\u\e\l\h\6\8\v\8\1\s\2\a\b\v\g\3\0\2\e\9\l\g\7\v\p\t\j\r\v\7\5\9\1\0\x\4\l\k\0\t\m\o\9\5\q\u\6\f\f\u\u\d\a\0\c\n\b\p\y\r\f\3\b\c\h\w\t\0\x\6\s\5\l\p\z\t\m\8\k\n\c\f\v\a\0\c\c\y\c\g\g\4\4\a\9\x\l\v\u\i\g\3\z\y\q\d\i\d\5\9\j\8\z\7\u\j\r\j\0\1\l\2\k\h\d\i\8\j\v\o\0\0\x\q\z\5\z\j\m\0\1\5\v\d\e\v\3\v\g\o\n\x\p\r\j\h\3\i\3\m\3\s\e\y\k\s\1\d\b\2\3\2\z\b\6\p\s\e\j\a\i\4\0\7\n\i\j\h\g\8\c\f\d\i\u\c\q\g\n\i\1\9\c\l\8\w\6\9\l\q\h\e\c\b\z\p\7\k\j\f\q\h\z\k\h\y\4\v\y\c\5\0\r\a\9\5\1\m\o\k\4\6\f\1\1\n\z\4\t\x\6\f\d\0\n\p\f\s\e\m\5\l\3\l\1\e\l\b\3\j\j\7\x\v\m\9\p\7\o\b\1\2\t\h\d\y\n\e\e\7\3\9\l\4\l\l\k\c\4\b\k\p\w\9\u\6\a\7\k\j\m\8\r\3\c\5\l\7\r\d\i\9\u\m\4\g\z\a\4\a\s\2\j\s\5\v\4\g\l\m\k\j\4\x\q\b\n\q\z\2\z\c\9\c\f\g\f\9\o\6\h\z\b\5\4\c\j\k\r\t\a\n\x\g\j\f\r\5\4\e\2\f\n\g\g\3\s\m\h\z\t\b\k\j\t\3\n\3\z\b\4 ]] 00:09:31.997 11:12:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:31.997 11:12:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:32.255 [2024-12-10 11:12:38.880723] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:32.255 [2024-12-10 11:12:38.881295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62662 ] 00:09:32.514 [2024-12-10 11:12:39.095927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.514 [2024-12-10 11:12:39.296603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.772 [2024-12-10 11:12:39.563036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.031  [2024-12-10T11:12:41.234Z] Copying: 512/512 [B] (average 250 kBps) 00:09:34.408 00:09:34.408 11:12:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xakpv3xcn54hh5wxr23r4pkwjfwzn4tmdlzanmrd0ok52u1431jgkaimxlv1xq5h9vzj8kyab6n7d8jly978yy9fa3ofcvflzk9jyrofavk1q7nuelh68v81s2abvg302e9lg7vptjrv75910x4lk0tmo95qu6ffuuda0cnbpyrf3bchwt0x6s5lpztm8kncfva0ccycgg44a9xlvuig3zyqdid59j8z7ujrj01l2khdi8jvo00xqz5zjm015vdev3vgonxprjh3i3m3seyks1db232zb6psejai407nijhg8cfdiucqgni19cl8w69lqhecbzp7kjfqhzkhy4vyc50ra951mok46f11nz4tx6fd0npfsem5l3l1elb3jj7xvm9p7ob12thdynee739l4llkc4bkpw9u6a7kjm8r3c5l7rdi9um4gza4as2js5v4glmkj4xqbnqz2zc9cfgf9o6hzb54cjkrtanxgjfr54e2fngg3smhztbkjt3n3zb4 == 
\x\a\k\p\v\3\x\c\n\5\4\h\h\5\w\x\r\2\3\r\4\p\k\w\j\f\w\z\n\4\t\m\d\l\z\a\n\m\r\d\0\o\k\5\2\u\1\4\3\1\j\g\k\a\i\m\x\l\v\1\x\q\5\h\9\v\z\j\8\k\y\a\b\6\n\7\d\8\j\l\y\9\7\8\y\y\9\f\a\3\o\f\c\v\f\l\z\k\9\j\y\r\o\f\a\v\k\1\q\7\n\u\e\l\h\6\8\v\8\1\s\2\a\b\v\g\3\0\2\e\9\l\g\7\v\p\t\j\r\v\7\5\9\1\0\x\4\l\k\0\t\m\o\9\5\q\u\6\f\f\u\u\d\a\0\c\n\b\p\y\r\f\3\b\c\h\w\t\0\x\6\s\5\l\p\z\t\m\8\k\n\c\f\v\a\0\c\c\y\c\g\g\4\4\a\9\x\l\v\u\i\g\3\z\y\q\d\i\d\5\9\j\8\z\7\u\j\r\j\0\1\l\2\k\h\d\i\8\j\v\o\0\0\x\q\z\5\z\j\m\0\1\5\v\d\e\v\3\v\g\o\n\x\p\r\j\h\3\i\3\m\3\s\e\y\k\s\1\d\b\2\3\2\z\b\6\p\s\e\j\a\i\4\0\7\n\i\j\h\g\8\c\f\d\i\u\c\q\g\n\i\1\9\c\l\8\w\6\9\l\q\h\e\c\b\z\p\7\k\j\f\q\h\z\k\h\y\4\v\y\c\5\0\r\a\9\5\1\m\o\k\4\6\f\1\1\n\z\4\t\x\6\f\d\0\n\p\f\s\e\m\5\l\3\l\1\e\l\b\3\j\j\7\x\v\m\9\p\7\o\b\1\2\t\h\d\y\n\e\e\7\3\9\l\4\l\l\k\c\4\b\k\p\w\9\u\6\a\7\k\j\m\8\r\3\c\5\l\7\r\d\i\9\u\m\4\g\z\a\4\a\s\2\j\s\5\v\4\g\l\m\k\j\4\x\q\b\n\q\z\2\z\c\9\c\f\g\f\9\o\6\h\z\b\5\4\c\j\k\r\t\a\n\x\g\j\f\r\5\4\e\2\f\n\g\g\3\s\m\h\z\t\b\k\j\t\3\n\3\z\b\4 ]] 00:09:34.408 11:12:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:34.408 11:12:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:34.408 11:12:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:34.408 11:12:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:34.408 11:12:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:34.408 11:12:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:34.667 [2024-12-10 11:12:41.295864] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:34.667 [2024-12-10 11:12:41.296045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62687 ] 00:09:34.667 [2024-12-10 11:12:41.476112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.925 [2024-12-10 11:12:41.673178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.184 [2024-12-10 11:12:41.888927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.442  [2024-12-10T11:12:43.643Z] Copying: 512/512 [B] (average 500 kBps) 00:09:36.817 00:09:36.817 11:12:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yqrf7dvvaplzto1sfhje1f3vqbmgadxdb9y4j08qdfvgbvftvaqdktpkh30nk79hoywqk1l6krgnyrx91q3gif9fw2f2cgy2fnmm5kuv61kld6zc1hwbynws9hk2bp3fbymb5dfmr116c4e1bd0u7ogwyw87s85ys8ekqz4b4vygsrxvqkdn7mger3vlhk75skoqkwddigqm1zkyo6j6rjz0wm3wpraabmrqxbca511bkmedkk3uvyulegtzzzxu0nxprjahd5krpkbyk0kutjat9edym7z8saabextv2i3kegbwe5capftdps0y6k1j6994d27mluipadvadkbm689jby0jxsscv86m3r6yilaaad5s0s3akt5cik6qxuwcz6aayirir77qbxi8r2b2f7mn0u69lz2mfl6it4i3bqbgeymhhkcevpr5bqopeyyr0nuajbkqz30kbl81s19suyyxjco0delf907gaijdt8jeui30x10fdoyl8yqxm5p2 == \y\q\r\f\7\d\v\v\a\p\l\z\t\o\1\s\f\h\j\e\1\f\3\v\q\b\m\g\a\d\x\d\b\9\y\4\j\0\8\q\d\f\v\g\b\v\f\t\v\a\q\d\k\t\p\k\h\3\0\n\k\7\9\h\o\y\w\q\k\1\l\6\k\r\g\n\y\r\x\9\1\q\3\g\i\f\9\f\w\2\f\2\c\g\y\2\f\n\m\m\5\k\u\v\6\1\k\l\d\6\z\c\1\h\w\b\y\n\w\s\9\h\k\2\b\p\3\f\b\y\m\b\5\d\f\m\r\1\1\6\c\4\e\1\b\d\0\u\7\o\g\w\y\w\8\7\s\8\5\y\s\8\e\k\q\z\4\b\4\v\y\g\s\r\x\v\q\k\d\n\7\m\g\e\r\3\v\l\h\k\7\5\s\k\o\q\k\w\d\d\i\g\q\m\1\z\k\y\o\6\j\6\r\j\z\0\w\m\3\w\p\r\a\a\b\m\r\q\x\b\c\a\5\1\1\b\k\m\e\d\k\k\3\u\v\y\u\l\e\g\t\z\z\z\x\u\0\n\x\p\r\j\a\h\d\5\k\r\p\k\b\y\k\0\k\u\t\j\a\t\9\e\d\y\m\7\z\8\s\a\a\b\e\x\t\v\2\i\3\k\e\g\b\w\e\5\c\a\p\f\t\d\p\s\0\y\6\k\1\j\6\9\9\4\d\2\7\m\l\u\i\p\a\d\v\a\d\k\b\m\6\8\9\j\b\y\0\j\x\s\s\c\v\8\6\m\3\r\6\y\i\l\a\a\a\d\5\s\0\s\3\a\k\t\5\c\i\k\6\q\x\u\w\c\z\6\a\a\y\i\r\i\r\7\7\q\b\x\i\8\r\2\b\2\f\7\m\n\0\u\6\9\l\z\2\m\f\l\6\i\t\4\i\3\b\q\b\g\e\y\m\h\h\k\c\e\v\p\r\5\b\q\o\p\e\y\y\r\0\n\u\a\j\b\k\q\z\3\0\k\b\l\8\1\s\1\9\s\u\y\y\x\j\c\o\0\d\e\l\f\9\0\7\g\a\i\j\d\t\8\j\e\u\i\3\0\x\1\0\f\d\o\y\l\8\y\q\x\m\5\p\2 ]] 00:09:36.817 11:12:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:36.817 11:12:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:36.817 [2024-12-10 11:12:43.486521] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:36.817 [2024-12-10 11:12:43.487050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62712 ] 00:09:37.076 [2024-12-10 11:12:43.682551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.076 [2024-12-10 11:12:43.872824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.334 [2024-12-10 11:12:44.146547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.592  [2024-12-10T11:12:45.843Z] Copying: 512/512 [B] (average 500 kBps) 00:09:39.017 00:09:39.017 11:12:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yqrf7dvvaplzto1sfhje1f3vqbmgadxdb9y4j08qdfvgbvftvaqdktpkh30nk79hoywqk1l6krgnyrx91q3gif9fw2f2cgy2fnmm5kuv61kld6zc1hwbynws9hk2bp3fbymb5dfmr116c4e1bd0u7ogwyw87s85ys8ekqz4b4vygsrxvqkdn7mger3vlhk75skoqkwddigqm1zkyo6j6rjz0wm3wpraabmrqxbca511bkmedkk3uvyulegtzzzxu0nxprjahd5krpkbyk0kutjat9edym7z8saabextv2i3kegbwe5capftdps0y6k1j6994d27mluipadvadkbm689jby0jxsscv86m3r6yilaaad5s0s3akt5cik6qxuwcz6aayirir77qbxi8r2b2f7mn0u69lz2mfl6it4i3bqbgeymhhkcevpr5bqopeyyr0nuajbkqz30kbl81s19suyyxjco0delf907gaijdt8jeui30x10fdoyl8yqxm5p2 == \y\q\r\f\7\d\v\v\a\p\l\z\t\o\1\s\f\h\j\e\1\f\3\v\q\b\m\g\a\d\x\d\b\9\y\4\j\0\8\q\d\f\v\g\b\v\f\t\v\a\q\d\k\t\p\k\h\3\0\n\k\7\9\h\o\y\w\q\k\1\l\6\k\r\g\n\y\r\x\9\1\q\3\g\i\f\9\f\w\2\f\2\c\g\y\2\f\n\m\m\5\k\u\v\6\1\k\l\d\6\z\c\1\h\w\b\y\n\w\s\9\h\k\2\b\p\3\f\b\y\m\b\5\d\f\m\r\1\1\6\c\4\e\1\b\d\0\u\7\o\g\w\y\w\8\7\s\8\5\y\s\8\e\k\q\z\4\b\4\v\y\g\s\r\x\v\q\k\d\n\7\m\g\e\r\3\v\l\h\k\7\5\s\k\o\q\k\w\d\d\i\g\q\m\1\z\k\y\o\6\j\6\r\j\z\0\w\m\3\w\p\r\a\a\b\m\r\q\x\b\c\a\5\1\1\b\k\m\e\d\k\k\3\u\v\y\u\l\e\g\t\z\z\z\x\u\0\n\x\p\r\j\a\h\d\5\k\r\p\k\b\y\k\0\k\u\t\j\a\t\9\e\d\y\m\7\z\8\s\a\a\b\e\x\t\v\2\i\3\k\e\g\b\w\e\5\c\a\p\f\t\d\p\s\0\y\6\k\1\j\6\9\9\4\d\2\7\m\l\u\i\p\a\d\v\a\d\k\b\m\6\8\9\j\b\y\0\j\x\s\s\c\v\8\6\m\3\r\6\y\i\l\a\a\a\d\5\s\0\s\3\a\k\t\5\c\i\k\6\q\x\u\w\c\z\6\a\a\y\i\r\i\r\7\7\q\b\x\i\8\r\2\b\2\f\7\m\n\0\u\6\9\l\z\2\m\f\l\6\i\t\4\i\3\b\q\b\g\e\y\m\h\h\k\c\e\v\p\r\5\b\q\o\p\e\y\y\r\0\n\u\a\j\b\k\q\z\3\0\k\b\l\8\1\s\1\9\s\u\y\y\x\j\c\o\0\d\e\l\f\9\0\7\g\a\i\j\d\t\8\j\e\u\i\3\0\x\1\0\f\d\o\y\l\8\y\q\x\m\5\p\2 ]] 00:09:39.017 11:12:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:39.017 11:12:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:39.275 [2024-12-10 11:12:45.949967] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:39.275 [2024-12-10 11:12:45.950580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62737 ] 00:09:39.534 [2024-12-10 11:12:46.148311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.534 [2024-12-10 11:12:46.332467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.792 [2024-12-10 11:12:46.597768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.051  [2024-12-10T11:12:48.255Z] Copying: 512/512 [B] (average 500 kBps) 00:09:41.429 00:09:41.429 11:12:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yqrf7dvvaplzto1sfhje1f3vqbmgadxdb9y4j08qdfvgbvftvaqdktpkh30nk79hoywqk1l6krgnyrx91q3gif9fw2f2cgy2fnmm5kuv61kld6zc1hwbynws9hk2bp3fbymb5dfmr116c4e1bd0u7ogwyw87s85ys8ekqz4b4vygsrxvqkdn7mger3vlhk75skoqkwddigqm1zkyo6j6rjz0wm3wpraabmrqxbca511bkmedkk3uvyulegtzzzxu0nxprjahd5krpkbyk0kutjat9edym7z8saabextv2i3kegbwe5capftdps0y6k1j6994d27mluipadvadkbm689jby0jxsscv86m3r6yilaaad5s0s3akt5cik6qxuwcz6aayirir77qbxi8r2b2f7mn0u69lz2mfl6it4i3bqbgeymhhkcevpr5bqopeyyr0nuajbkqz30kbl81s19suyyxjco0delf907gaijdt8jeui30x10fdoyl8yqxm5p2 == \y\q\r\f\7\d\v\v\a\p\l\z\t\o\1\s\f\h\j\e\1\f\3\v\q\b\m\g\a\d\x\d\b\9\y\4\j\0\8\q\d\f\v\g\b\v\f\t\v\a\q\d\k\t\p\k\h\3\0\n\k\7\9\h\o\y\w\q\k\1\l\6\k\r\g\n\y\r\x\9\1\q\3\g\i\f\9\f\w\2\f\2\c\g\y\2\f\n\m\m\5\k\u\v\6\1\k\l\d\6\z\c\1\h\w\b\y\n\w\s\9\h\k\2\b\p\3\f\b\y\m\b\5\d\f\m\r\1\1\6\c\4\e\1\b\d\0\u\7\o\g\w\y\w\8\7\s\8\5\y\s\8\e\k\q\z\4\b\4\v\y\g\s\r\x\v\q\k\d\n\7\m\g\e\r\3\v\l\h\k\7\5\s\k\o\q\k\w\d\d\i\g\q\m\1\z\k\y\o\6\j\6\r\j\z\0\w\m\3\w\p\r\a\a\b\m\r\q\x\b\c\a\5\1\1\b\k\m\e\d\k\k\3\u\v\y\u\l\e\g\t\z\z\z\x\u\0\n\x\p\r\j\a\h\d\5\k\r\p\k\b\y\k\0\k\u\t\j\a\t\9\e\d\y\m\7\z\8\s\a\a\b\e\x\t\v\2\i\3\k\e\g\b\w\e\5\c\a\p\f\t\d\p\s\0\y\6\k\1\j\6\9\9\4\d\2\7\m\l\u\i\p\a\d\v\a\d\k\b\m\6\8\9\j\b\y\0\j\x\s\s\c\v\8\6\m\3\r\6\y\i\l\a\a\a\d\5\s\0\s\3\a\k\t\5\c\i\k\6\q\x\u\w\c\z\6\a\a\y\i\r\i\r\7\7\q\b\x\i\8\r\2\b\2\f\7\m\n\0\u\6\9\l\z\2\m\f\l\6\i\t\4\i\3\b\q\b\g\e\y\m\h\h\k\c\e\v\p\r\5\b\q\o\p\e\y\y\r\0\n\u\a\j\b\k\q\z\3\0\k\b\l\8\1\s\1\9\s\u\y\y\x\j\c\o\0\d\e\l\f\9\0\7\g\a\i\j\d\t\8\j\e\u\i\3\0\x\1\0\f\d\o\y\l\8\y\q\x\m\5\p\2 ]] 00:09:41.429 11:12:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:41.429 11:12:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:41.688 [2024-12-10 11:12:48.278483] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:09:41.688 [2024-12-10 11:12:48.278711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62768 ] 00:09:41.688 [2024-12-10 11:12:48.489176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.947 [2024-12-10 11:12:48.663147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.205 [2024-12-10 11:12:48.847830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.205  [2024-12-10T11:12:50.408Z] Copying: 512/512 [B] (average 250 kBps) 00:09:43.582 00:09:43.582 11:12:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yqrf7dvvaplzto1sfhje1f3vqbmgadxdb9y4j08qdfvgbvftvaqdktpkh30nk79hoywqk1l6krgnyrx91q3gif9fw2f2cgy2fnmm5kuv61kld6zc1hwbynws9hk2bp3fbymb5dfmr116c4e1bd0u7ogwyw87s85ys8ekqz4b4vygsrxvqkdn7mger3vlhk75skoqkwddigqm1zkyo6j6rjz0wm3wpraabmrqxbca511bkmedkk3uvyulegtzzzxu0nxprjahd5krpkbyk0kutjat9edym7z8saabextv2i3kegbwe5capftdps0y6k1j6994d27mluipadvadkbm689jby0jxsscv86m3r6yilaaad5s0s3akt5cik6qxuwcz6aayirir77qbxi8r2b2f7mn0u69lz2mfl6it4i3bqbgeymhhkcevpr5bqopeyyr0nuajbkqz30kbl81s19suyyxjco0delf907gaijdt8jeui30x10fdoyl8yqxm5p2 == \y\q\r\f\7\d\v\v\a\p\l\z\t\o\1\s\f\h\j\e\1\f\3\v\q\b\m\g\a\d\x\d\b\9\y\4\j\0\8\q\d\f\v\g\b\v\f\t\v\a\q\d\k\t\p\k\h\3\0\n\k\7\9\h\o\y\w\q\k\1\l\6\k\r\g\n\y\r\x\9\1\q\3\g\i\f\9\f\w\2\f\2\c\g\y\2\f\n\m\m\5\k\u\v\6\1\k\l\d\6\z\c\1\h\w\b\y\n\w\s\9\h\k\2\b\p\3\f\b\y\m\b\5\d\f\m\r\1\1\6\c\4\e\1\b\d\0\u\7\o\g\w\y\w\8\7\s\8\5\y\s\8\e\k\q\z\4\b\4\v\y\g\s\r\x\v\q\k\d\n\7\m\g\e\r\3\v\l\h\k\7\5\s\k\o\q\k\w\d\d\i\g\q\m\1\z\k\y\o\6\j\6\r\j\z\0\w\m\3\w\p\r\a\a\b\m\r\q\x\b\c\a\5\1\1\b\k\m\e\d\k\k\3\u\v\y\u\l\e\g\t\z\z\z\x\u\0\n\x\p\r\j\a\h\d\5\k\r\p\k\b\y\k\0\k\u\t\j\a\t\9\e\d\y\m\7\z\8\s\a\a\b\e\x\t\v\2\i\3\k\e\g\b\w\e\5\c\a\p\f\t\d\p\s\0\y\6\k\1\j\6\9\9\4\d\2\7\m\l\u\i\p\a\d\v\a\d\k\b\m\6\8\9\j\b\y\0\j\x\s\s\c\v\8\6\m\3\r\6\y\i\l\a\a\a\d\5\s\0\s\3\a\k\t\5\c\i\k\6\q\x\u\w\c\z\6\a\a\y\i\r\i\r\7\7\q\b\x\i\8\r\2\b\2\f\7\m\n\0\u\6\9\l\z\2\m\f\l\6\i\t\4\i\3\b\q\b\g\e\y\m\h\h\k\c\e\v\p\r\5\b\q\o\p\e\y\y\r\0\n\u\a\j\b\k\q\z\3\0\k\b\l\8\1\s\1\9\s\u\y\y\x\j\c\o\0\d\e\l\f\9\0\7\g\a\i\j\d\t\8\j\e\u\i\3\0\x\1\0\f\d\o\y\l\8\y\q\x\m\5\p\2 ]] 00:09:43.582 00:09:43.582 real 0m17.588s 00:09:43.582 user 0m14.297s 00:09:43.582 sys 0m1.923s 00:09:43.582 11:12:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.582 11:12:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:43.582 ************************************ 00:09:43.582 END TEST dd_flags_misc_forced_aio 00:09:43.582 ************************************ 00:09:43.582 11:12:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:43.582 11:12:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:43.582 11:12:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:43.582 ************************************ 00:09:43.582 END TEST spdk_dd_posix 00:09:43.582 ************************************ 00:09:43.582 00:09:43.582 real 1m5.867s 00:09:43.582 user 0m52.182s 00:09:43.582 sys 0m16.971s 00:09:43.582 11:12:50 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.582 11:12:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:43.582 11:12:50 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:43.582 11:12:50 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.582 11:12:50 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.582 11:12:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:43.582 ************************************ 00:09:43.582 START TEST spdk_dd_malloc 00:09:43.582 ************************************ 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:43.582 * Looking for test storage... 00:09:43.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.582 --rc genhtml_branch_coverage=1 00:09:43.582 --rc genhtml_function_coverage=1 00:09:43.582 --rc genhtml_legend=1 00:09:43.582 --rc geninfo_all_blocks=1 00:09:43.582 --rc geninfo_unexecuted_blocks=1 00:09:43.582 00:09:43.582 ' 00:09:43.582 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.583 --rc genhtml_branch_coverage=1 00:09:43.583 --rc genhtml_function_coverage=1 00:09:43.583 --rc genhtml_legend=1 00:09:43.583 --rc geninfo_all_blocks=1 00:09:43.583 --rc geninfo_unexecuted_blocks=1 00:09:43.583 00:09:43.583 ' 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.583 --rc genhtml_branch_coverage=1 00:09:43.583 --rc genhtml_function_coverage=1 00:09:43.583 --rc genhtml_legend=1 00:09:43.583 --rc geninfo_all_blocks=1 00:09:43.583 --rc geninfo_unexecuted_blocks=1 00:09:43.583 00:09:43.583 ' 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.583 --rc genhtml_branch_coverage=1 00:09:43.583 --rc genhtml_function_coverage=1 00:09:43.583 --rc genhtml_legend=1 00:09:43.583 --rc geninfo_all_blocks=1 00:09:43.583 --rc geninfo_unexecuted_blocks=1 00:09:43.583 00:09:43.583 ' 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.583 11:12:50 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:43.583 ************************************ 00:09:43.583 START TEST dd_malloc_copy 00:09:43.583 ************************************ 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:43.583 11:12:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:43.842 { 00:09:43.842 "subsystems": [ 00:09:43.842 { 00:09:43.842 "subsystem": "bdev", 00:09:43.842 "config": [ 00:09:43.842 { 00:09:43.842 "params": { 00:09:43.842 "block_size": 512, 00:09:43.842 "num_blocks": 1048576, 00:09:43.842 "name": "malloc0" 00:09:43.842 }, 00:09:43.842 "method": "bdev_malloc_create" 00:09:43.842 }, 00:09:43.842 { 00:09:43.842 "params": { 00:09:43.842 "block_size": 512, 00:09:43.842 "num_blocks": 1048576, 00:09:43.842 "name": "malloc1" 00:09:43.842 }, 00:09:43.842 "method": "bdev_malloc_create" 00:09:43.842 }, 00:09:43.842 { 00:09:43.842 "method": "bdev_wait_for_examine" 00:09:43.842 } 00:09:43.842 ] 00:09:43.842 } 00:09:43.842 ] 00:09:43.842 } 00:09:43.842 [2024-12-10 11:12:50.550819] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:43.842 [2024-12-10 11:12:50.551247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62862 ] 00:09:44.100 [2024-12-10 11:12:50.751122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.359 [2024-12-10 11:12:50.936420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.359 [2024-12-10 11:12:51.149293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.890  [2024-12-10T11:12:54.652Z] Copying: 147/512 [MB] (147 MBps) [2024-12-10T11:12:55.597Z] Copying: 281/512 [MB] (134 MBps) [2024-12-10T11:12:56.161Z] Copying: 430/512 [MB] (149 MBps) [2024-12-10T11:13:00.347Z] Copying: 512/512 [MB] (average 138 MBps) 00:09:53.521 00:09:53.521 11:12:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:53.521 11:12:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:53.521 11:12:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:53.521 11:12:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:53.521 { 00:09:53.521 "subsystems": [ 00:09:53.521 { 00:09:53.521 "subsystem": "bdev", 00:09:53.521 "config": [ 00:09:53.521 { 00:09:53.521 "params": { 00:09:53.521 "block_size": 512, 00:09:53.521 "num_blocks": 1048576, 00:09:53.521 "name": "malloc0" 00:09:53.521 }, 00:09:53.521 "method": "bdev_malloc_create" 00:09:53.521 }, 00:09:53.521 { 00:09:53.521 "params": { 00:09:53.521 "block_size": 512, 00:09:53.521 "num_blocks": 1048576, 00:09:53.521 "name": 
"malloc1" 00:09:53.521 }, 00:09:53.521 "method": "bdev_malloc_create" 00:09:53.521 }, 00:09:53.521 { 00:09:53.521 "method": "bdev_wait_for_examine" 00:09:53.521 } 00:09:53.521 ] 00:09:53.521 } 00:09:53.521 ] 00:09:53.521 } 00:09:53.522 [2024-12-10 11:13:00.085416] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:09:53.522 [2024-12-10 11:13:00.085679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62971 ] 00:09:53.522 [2024-12-10 11:13:00.263951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.779 [2024-12-10 11:13:00.438856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.037 [2024-12-10 11:13:00.626266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.936  [2024-12-10T11:13:04.140Z] Copying: 153/512 [MB] (153 MBps) [2024-12-10T11:13:04.715Z] Copying: 306/512 [MB] (153 MBps) [2024-12-10T11:13:05.281Z] Copying: 458/512 [MB] (152 MBps) [2024-12-10T11:13:09.467Z] Copying: 512/512 [MB] (average 151 MBps) 00:10:02.641 00:10:02.641 ************************************ 00:10:02.641 END TEST dd_malloc_copy 00:10:02.641 ************************************ 00:10:02.641 00:10:02.641 real 0m18.240s 00:10:02.641 user 0m17.016s 00:10:02.641 sys 0m0.953s 00:10:02.641 11:13:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.641 11:13:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:02.641 ************************************ 00:10:02.641 END TEST spdk_dd_malloc 00:10:02.641 ************************************ 00:10:02.641 00:10:02.641 real 0m18.530s 00:10:02.641 user 0m17.169s 00:10:02.641 sys 0m1.073s 00:10:02.641 11:13:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.641 11:13:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:10:02.641 11:13:08 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:10:02.641 11:13:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:02.641 11:13:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.641 11:13:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:02.641 ************************************ 00:10:02.641 START TEST spdk_dd_bdev_to_bdev 00:10:02.641 ************************************ 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:10:02.641 * Looking for test storage... 
00:10:02.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:10:02.641 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:02.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.642 --rc genhtml_branch_coverage=1 00:10:02.642 --rc genhtml_function_coverage=1 00:10:02.642 --rc genhtml_legend=1 00:10:02.642 --rc geninfo_all_blocks=1 00:10:02.642 --rc geninfo_unexecuted_blocks=1 00:10:02.642 00:10:02.642 ' 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:02.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.642 --rc genhtml_branch_coverage=1 00:10:02.642 --rc genhtml_function_coverage=1 00:10:02.642 --rc genhtml_legend=1 00:10:02.642 --rc geninfo_all_blocks=1 00:10:02.642 --rc geninfo_unexecuted_blocks=1 00:10:02.642 00:10:02.642 ' 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:02.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.642 --rc genhtml_branch_coverage=1 00:10:02.642 --rc genhtml_function_coverage=1 00:10:02.642 --rc genhtml_legend=1 00:10:02.642 --rc geninfo_all_blocks=1 00:10:02.642 --rc geninfo_unexecuted_blocks=1 00:10:02.642 00:10:02.642 ' 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:02.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.642 --rc genhtml_branch_coverage=1 00:10:02.642 --rc genhtml_function_coverage=1 00:10:02.642 --rc genhtml_legend=1 00:10:02.642 --rc geninfo_all_blocks=1 00:10:02.642 --rc geninfo_unexecuted_blocks=1 00:10:02.642 00:10:02.642 ' 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.642 11:13:08 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:02.642 ************************************ 00:10:02.642 START TEST dd_inflate_file 00:10:02.642 ************************************ 00:10:02.642 11:13:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:10:02.642 [2024-12-10 11:13:09.035904] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
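For reference, the dd_inflate_file step that starts here reduces to a single spdk_dd call that appends 64 one-megabyte blocks of zeroes to dd.dump0, so the later bdev copies have a source file much larger than the block size. A minimal sketch of the equivalent invocation, using the exact flags and paths shown in the trace:

    # Inflate dd.dump0 by appending 64 x 1 MiB of zeroes (sketch of the traced command)
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$SPDK_DD" --if=/dev/zero \
               --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
               --oflag=append --bs=1048576 --count=64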
00:10:02.642 [2024-12-10 11:13:09.036273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63140 ] 00:10:02.642 [2024-12-10 11:13:09.209465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.642 [2024-12-10 11:13:09.376231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.900 [2024-12-10 11:13:09.557677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.900  [2024-12-10T11:13:10.718Z] Copying: 64/64 [MB] (average 1333 MBps) 00:10:03.892 00:10:04.151 00:10:04.151 real 0m1.800s 00:10:04.151 user 0m1.492s 00:10:04.151 sys 0m1.029s 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:10:04.151 ************************************ 00:10:04.151 END TEST dd_inflate_file 00:10:04.151 ************************************ 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:04.151 ************************************ 00:10:04.151 START TEST dd_copy_to_out_bdev 00:10:04.151 ************************************ 00:10:04.151 11:13:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:04.151 { 00:10:04.151 "subsystems": [ 00:10:04.151 { 00:10:04.151 "subsystem": "bdev", 00:10:04.151 "config": [ 00:10:04.151 { 00:10:04.151 "params": { 00:10:04.151 "trtype": "pcie", 00:10:04.151 "traddr": "0000:00:10.0", 00:10:04.151 "name": "Nvme0" 00:10:04.151 }, 00:10:04.151 "method": "bdev_nvme_attach_controller" 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "params": { 00:10:04.151 "trtype": "pcie", 00:10:04.151 "traddr": "0000:00:11.0", 00:10:04.151 "name": "Nvme1" 00:10:04.151 }, 00:10:04.151 "method": "bdev_nvme_attach_controller" 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "method": "bdev_wait_for_examine" 00:10:04.151 } 00:10:04.151 ] 00:10:04.151 } 00:10:04.151 ] 00:10:04.151 } 00:10:04.151 [2024-12-10 11:13:10.936272] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:04.151 [2024-12-10 11:13:10.937130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63193 ] 00:10:04.416 [2024-12-10 11:13:11.131838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.416 [2024-12-10 11:13:11.235439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.674 [2024-12-10 11:13:11.495642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.050  [2024-12-10T11:13:13.813Z] Copying: 64/64 [MB] (average 64 MBps) 00:10:06.987 00:10:06.987 00:10:06.987 real 0m3.021s 00:10:06.987 user 0m2.681s 00:10:06.987 sys 0m2.040s 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:07.247 ************************************ 00:10:07.247 END TEST dd_copy_to_out_bdev 00:10:07.247 ************************************ 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:07.247 ************************************ 00:10:07.247 START TEST dd_offset_magic 00:10:07.247 ************************************ 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:07.247 11:13:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:07.247 { 00:10:07.247 "subsystems": [ 00:10:07.247 { 00:10:07.247 "subsystem": "bdev", 00:10:07.247 "config": [ 00:10:07.247 { 00:10:07.247 "params": { 00:10:07.247 "trtype": "pcie", 00:10:07.247 "traddr": "0000:00:10.0", 00:10:07.247 "name": "Nvme0" 00:10:07.247 }, 00:10:07.247 "method": "bdev_nvme_attach_controller" 00:10:07.247 }, 00:10:07.247 { 00:10:07.247 "params": { 00:10:07.247 "trtype": "pcie", 00:10:07.247 "traddr": "0000:00:11.0", 00:10:07.247 "name": "Nvme1" 00:10:07.247 }, 00:10:07.247 "method": 
"bdev_nvme_attach_controller" 00:10:07.247 }, 00:10:07.247 { 00:10:07.247 "method": "bdev_wait_for_examine" 00:10:07.247 } 00:10:07.247 ] 00:10:07.247 } 00:10:07.247 ] 00:10:07.247 } 00:10:07.247 [2024-12-10 11:13:13.974950] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:07.247 [2024-12-10 11:13:13.975128] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63248 ] 00:10:07.506 [2024-12-10 11:13:14.167154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.506 [2024-12-10 11:13:14.292400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.764 [2024-12-10 11:13:14.490859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.022  [2024-12-10T11:13:15.785Z] Copying: 65/65 [MB] (average 1250 MBps) 00:10:08.959 00:10:08.959 11:13:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:10:08.959 11:13:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:08.959 11:13:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:08.959 11:13:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:09.218 { 00:10:09.218 "subsystems": [ 00:10:09.218 { 00:10:09.218 "subsystem": "bdev", 00:10:09.218 "config": [ 00:10:09.218 { 00:10:09.218 "params": { 00:10:09.218 "trtype": "pcie", 00:10:09.218 "traddr": "0000:00:10.0", 00:10:09.218 "name": "Nvme0" 00:10:09.218 }, 00:10:09.218 "method": "bdev_nvme_attach_controller" 00:10:09.218 }, 00:10:09.218 { 00:10:09.218 "params": { 00:10:09.218 "trtype": "pcie", 00:10:09.218 "traddr": "0000:00:11.0", 00:10:09.218 "name": "Nvme1" 00:10:09.218 }, 00:10:09.218 "method": "bdev_nvme_attach_controller" 00:10:09.218 }, 00:10:09.218 { 00:10:09.218 "method": "bdev_wait_for_examine" 00:10:09.218 } 00:10:09.218 ] 00:10:09.218 } 00:10:09.218 ] 00:10:09.218 } 00:10:09.218 [2024-12-10 11:13:15.832600] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:09.218 [2024-12-10 11:13:15.832761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63280 ] 00:10:09.218 [2024-12-10 11:13:16.002171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.477 [2024-12-10 11:13:16.105449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.477 [2024-12-10 11:13:16.285665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.734  [2024-12-10T11:13:17.936Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:11.110 00:10:11.110 11:13:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:11.110 11:13:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:11.110 11:13:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:11.110 11:13:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:10:11.110 11:13:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:11.110 11:13:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:11.110 11:13:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:11.110 { 00:10:11.110 "subsystems": [ 00:10:11.110 { 00:10:11.110 "subsystem": "bdev", 00:10:11.110 "config": [ 00:10:11.110 { 00:10:11.110 "params": { 00:10:11.110 "trtype": "pcie", 00:10:11.110 "traddr": "0000:00:10.0", 00:10:11.110 "name": "Nvme0" 00:10:11.110 }, 00:10:11.110 "method": "bdev_nvme_attach_controller" 00:10:11.110 }, 00:10:11.110 { 00:10:11.110 "params": { 00:10:11.110 "trtype": "pcie", 00:10:11.110 "traddr": "0000:00:11.0", 00:10:11.110 "name": "Nvme1" 00:10:11.110 }, 00:10:11.110 "method": "bdev_nvme_attach_controller" 00:10:11.110 }, 00:10:11.110 { 00:10:11.110 "method": "bdev_wait_for_examine" 00:10:11.110 } 00:10:11.110 ] 00:10:11.110 } 00:10:11.110 ] 00:10:11.110 } 00:10:11.110 [2024-12-10 11:13:17.640025] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:11.110 [2024-12-10 11:13:17.640195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:10:11.110 [2024-12-10 11:13:17.814436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.110 [2024-12-10 11:13:17.920955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.368 [2024-12-10 11:13:18.101897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.626  [2024-12-10T11:13:19.444Z] Copying: 65/65 [MB] (average 1160 MBps) 00:10:12.618 00:10:12.618 11:13:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:10:12.618 11:13:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:12.618 11:13:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:12.618 11:13:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:12.618 { 00:10:12.618 "subsystems": [ 00:10:12.618 { 00:10:12.618 "subsystem": "bdev", 00:10:12.618 "config": [ 00:10:12.618 { 00:10:12.618 "params": { 00:10:12.618 "trtype": "pcie", 00:10:12.618 "traddr": "0000:00:10.0", 00:10:12.618 "name": "Nvme0" 00:10:12.618 }, 00:10:12.618 "method": "bdev_nvme_attach_controller" 00:10:12.618 }, 00:10:12.618 { 00:10:12.618 "params": { 00:10:12.618 "trtype": "pcie", 00:10:12.618 "traddr": "0000:00:11.0", 00:10:12.618 "name": "Nvme1" 00:10:12.618 }, 00:10:12.618 "method": "bdev_nvme_attach_controller" 00:10:12.618 }, 00:10:12.618 { 00:10:12.618 "method": "bdev_wait_for_examine" 00:10:12.618 } 00:10:12.618 ] 00:10:12.618 } 00:10:12.618 ] 00:10:12.618 } 00:10:12.618 [2024-12-10 11:13:19.372709] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:12.618 [2024-12-10 11:13:19.372885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63335 ] 00:10:12.876 [2024-12-10 11:13:19.557743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.876 [2024-12-10 11:13:19.660781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.135 [2024-12-10 11:13:19.845277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:13.394  [2024-12-10T11:13:21.156Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:14.330 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:14.330 00:10:14.330 real 0m7.246s 00:10:14.330 user 0m6.172s 00:10:14.330 sys 0m2.368s 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:14.330 ************************************ 00:10:14.330 END TEST dd_offset_magic 00:10:14.330 ************************************ 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:14.330 11:13:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:14.588 { 00:10:14.588 "subsystems": [ 00:10:14.588 { 00:10:14.588 "subsystem": "bdev", 00:10:14.588 "config": [ 00:10:14.588 { 00:10:14.588 "params": { 00:10:14.588 "trtype": "pcie", 00:10:14.588 "traddr": "0000:00:10.0", 00:10:14.588 "name": "Nvme0" 00:10:14.588 }, 00:10:14.588 "method": "bdev_nvme_attach_controller" 00:10:14.588 }, 00:10:14.588 { 00:10:14.588 "params": { 00:10:14.589 "trtype": "pcie", 00:10:14.589 "traddr": "0000:00:11.0", 00:10:14.589 "name": "Nvme1" 00:10:14.589 }, 00:10:14.589 "method": "bdev_nvme_attach_controller" 00:10:14.589 }, 00:10:14.589 { 00:10:14.589 "method": "bdev_wait_for_examine" 00:10:14.589 } 00:10:14.589 ] 00:10:14.589 } 00:10:14.589 ] 00:10:14.589 } 00:10:14.589 [2024-12-10 11:13:21.265497] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:14.589 [2024-12-10 11:13:21.265653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63386 ] 00:10:14.847 [2024-12-10 11:13:21.439748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.847 [2024-12-10 11:13:21.543039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.105 [2024-12-10 11:13:21.726304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:15.364  [2024-12-10T11:13:23.127Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:10:16.301 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:16.301 11:13:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:16.301 { 00:10:16.301 "subsystems": [ 00:10:16.301 { 00:10:16.301 "subsystem": "bdev", 00:10:16.301 "config": [ 00:10:16.301 { 00:10:16.301 "params": { 00:10:16.301 "trtype": "pcie", 00:10:16.301 "traddr": "0000:00:10.0", 00:10:16.301 "name": "Nvme0" 00:10:16.301 }, 00:10:16.301 "method": "bdev_nvme_attach_controller" 00:10:16.301 }, 00:10:16.301 { 00:10:16.301 "params": { 00:10:16.301 "trtype": "pcie", 00:10:16.301 "traddr": "0000:00:11.0", 00:10:16.301 "name": "Nvme1" 00:10:16.301 }, 00:10:16.301 "method": "bdev_nvme_attach_controller" 00:10:16.301 }, 00:10:16.301 { 00:10:16.301 "method": "bdev_wait_for_examine" 00:10:16.301 } 00:10:16.301 ] 00:10:16.301 } 00:10:16.301 ] 00:10:16.301 } 00:10:16.301 [2024-12-10 11:13:22.918393] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:16.301 [2024-12-10 11:13:22.918594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63414 ] 00:10:16.301 [2024-12-10 11:13:23.092242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.560 [2024-12-10 11:13:23.203627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.818 [2024-12-10 11:13:23.393763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.077  [2024-12-10T11:13:24.838Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:10:18.012 00:10:18.012 11:13:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:10:18.012 00:10:18.012 real 0m15.972s 00:10:18.012 user 0m13.515s 00:10:18.012 sys 0m7.376s 00:10:18.012 11:13:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.012 11:13:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:18.012 ************************************ 00:10:18.012 END TEST spdk_dd_bdev_to_bdev 00:10:18.012 ************************************ 00:10:18.012 11:13:24 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:10:18.012 11:13:24 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:10:18.012 11:13:24 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.012 11:13:24 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.012 11:13:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:18.012 ************************************ 00:10:18.012 START TEST spdk_dd_uring 00:10:18.012 ************************************ 00:10:18.012 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:10:18.012 * Looking for test storage... 
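Before the uring suite gets going, it is worth condensing what the dd_offset_magic rounds above actually did: copy 65 MiB from Nvme0n1 into Nvme1n1 at a 1 MiB offset (--seek), read one block back from the destination at the same offset (--skip), and check that the 26-byte magic written at the start of dd.dump0 survived the round trip. A sketch of one round; it assumes dd.dump0 (magic plus zero padding) has already been copied onto Nvme0n1 and that bdev.json carries the attach-controller config sketched earlier:

    # One offset_magic round, condensed from the trace (sketch)
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    "$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json bdev.json
    "$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --count=1 --skip=16 --bs=1048576 --json bdev.json
    read -rn26 magic_check < "$DUMP1"
    [[ $magic_check == 'This Is Our Magic, find it' ]]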
00:10:18.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:18.012 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.012 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.012 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.272 --rc genhtml_branch_coverage=1 00:10:18.272 --rc genhtml_function_coverage=1 00:10:18.272 --rc genhtml_legend=1 00:10:18.272 --rc geninfo_all_blocks=1 00:10:18.272 --rc geninfo_unexecuted_blocks=1 00:10:18.272 00:10:18.272 ' 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.272 --rc genhtml_branch_coverage=1 00:10:18.272 --rc genhtml_function_coverage=1 00:10:18.272 --rc genhtml_legend=1 00:10:18.272 --rc geninfo_all_blocks=1 00:10:18.272 --rc geninfo_unexecuted_blocks=1 00:10:18.272 00:10:18.272 ' 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.272 --rc genhtml_branch_coverage=1 00:10:18.272 --rc genhtml_function_coverage=1 00:10:18.272 --rc genhtml_legend=1 00:10:18.272 --rc geninfo_all_blocks=1 00:10:18.272 --rc geninfo_unexecuted_blocks=1 00:10:18.272 00:10:18.272 ' 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.272 --rc genhtml_branch_coverage=1 00:10:18.272 --rc genhtml_function_coverage=1 00:10:18.272 --rc genhtml_legend=1 00:10:18.272 --rc geninfo_all_blocks=1 00:10:18.272 --rc geninfo_unexecuted_blocks=1 00:10:18.272 00:10:18.272 ' 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:10:18.272 ************************************ 00:10:18.272 START TEST dd_uring_copy 00:10:18.272 ************************************ 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:18.272 
11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:10:18.272 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:10:18.273 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:10:18.273 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:18.273 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=kyaulonylc5efe20thu8t479lvd0a968ix1fre08vrqyp8rprvbvvpziekul2yojwbjcc04se6bkbf07ri5cbqszvgl0bsflb6s62xdc6erylb8j0dnfx08bhsee9kcmw7exyo09khujpqzj5g33b5f9vv7an76fq505pc71837klxua21msxko94jfpo7bseadosckaet95fdwyxizyrslhi114osm0hp3w9gvn9wd14vww4o8fgwwa687svx6l7o08koqdh1m34d2dmixre3z9y5mh1vh4pkr9gg0nu2d6fsz481f6ymqpmsh9uygcsvlc0b8klwmxw5y8czq9ldt6t6k3nvw7bm9lev492x81lavxckumf9sbsavexioeciaqtmfeuyzwlpg1gm8uxsz5nyvbgzbe8r1gfijvi8kltf4dlqdm882z3bs6k8v1810y5xnch1e6hjw00923ed7cvas14sg7tiukvido2j441cgtcq7fn1rj085emui66yb3x3yakznne5xd8snhdd76vs7nhxcx9xhwnc9jg5sn5t7gccneyy0996ek1kc708bojijayd4vfmj282mra32pwwpp7zd2ivyaw4opknwobw1dpa6xtx3hwo9tfgwa22mui68m9vnsxyzk29xwxi917xy21ogkf16hont805pcn2rgc20pnaotyo9r31755mwoqibgal19m2mnjlpb5z3uxojslqni9ka8m90o77c6rzb452st9mil0sjlxy6vp2ulpwia1gazf61ra5rvxo33q42ni7uw38oi4pdklvrc7vcpb4lafzcfezwvfnh8e0e7ashk6lkkj4q17by1zsmjrklx2198tk21wchi14ueezuwsg5mkjvbf393lh6tkpq0blmfngin60l5cw6312y23xrpdxslbt0nacdk6fmhw3rednydzizsthxk9x8rnj14mpmadke6xiqm01qcsn8qmmgugmj94hd99bfu414c4yvh5gzmov2dxrce9le8 00:10:18.273 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
kyaulonylc5efe20thu8t479lvd0a968ix1fre08vrqyp8rprvbvvpziekul2yojwbjcc04se6bkbf07ri5cbqszvgl0bsflb6s62xdc6erylb8j0dnfx08bhsee9kcmw7exyo09khujpqzj5g33b5f9vv7an76fq505pc71837klxua21msxko94jfpo7bseadosckaet95fdwyxizyrslhi114osm0hp3w9gvn9wd14vww4o8fgwwa687svx6l7o08koqdh1m34d2dmixre3z9y5mh1vh4pkr9gg0nu2d6fsz481f6ymqpmsh9uygcsvlc0b8klwmxw5y8czq9ldt6t6k3nvw7bm9lev492x81lavxckumf9sbsavexioeciaqtmfeuyzwlpg1gm8uxsz5nyvbgzbe8r1gfijvi8kltf4dlqdm882z3bs6k8v1810y5xnch1e6hjw00923ed7cvas14sg7tiukvido2j441cgtcq7fn1rj085emui66yb3x3yakznne5xd8snhdd76vs7nhxcx9xhwnc9jg5sn5t7gccneyy0996ek1kc708bojijayd4vfmj282mra32pwwpp7zd2ivyaw4opknwobw1dpa6xtx3hwo9tfgwa22mui68m9vnsxyzk29xwxi917xy21ogkf16hont805pcn2rgc20pnaotyo9r31755mwoqibgal19m2mnjlpb5z3uxojslqni9ka8m90o77c6rzb452st9mil0sjlxy6vp2ulpwia1gazf61ra5rvxo33q42ni7uw38oi4pdklvrc7vcpb4lafzcfezwvfnh8e0e7ashk6lkkj4q17by1zsmjrklx2198tk21wchi14ueezuwsg5mkjvbf393lh6tkpq0blmfngin60l5cw6312y23xrpdxslbt0nacdk6fmhw3rednydzizsthxk9x8rnj14mpmadke6xiqm01qcsn8qmmgugmj94hd99bfu414c4yvh5gzmov2dxrce9le8 00:10:18.273 11:13:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:10:18.273 [2024-12-10 11:13:25.040714] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:18.273 [2024-12-10 11:13:25.040861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63504 ] 00:10:18.531 [2024-12-10 11:13:25.212583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.531 [2024-12-10 11:13:25.347559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.790 [2024-12-10 11:13:25.526150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:19.724  [2024-12-10T11:13:29.083Z] Copying: 511/511 [MB] (average 1347 MBps) 00:10:22.258 00:10:22.258 11:13:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:10:22.258 11:13:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:10:22.258 11:13:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:22.258 11:13:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:22.258 { 00:10:22.258 "subsystems": [ 00:10:22.258 { 00:10:22.258 "subsystem": "bdev", 00:10:22.258 "config": [ 00:10:22.258 { 00:10:22.258 "params": { 00:10:22.258 "block_size": 512, 00:10:22.258 "num_blocks": 1048576, 00:10:22.258 "name": "malloc0" 00:10:22.258 }, 00:10:22.258 "method": "bdev_malloc_create" 00:10:22.258 }, 00:10:22.258 { 00:10:22.258 "params": { 00:10:22.258 "filename": "/dev/zram1", 00:10:22.258 "name": "uring0" 00:10:22.258 }, 00:10:22.258 "method": "bdev_uring_create" 00:10:22.258 }, 00:10:22.258 { 00:10:22.258 "method": "bdev_wait_for_examine" 00:10:22.258 } 00:10:22.258 ] 00:10:22.258 } 00:10:22.258 ] 00:10:22.258 } 00:10:22.258 [2024-12-10 11:13:28.800490] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
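The uring0 target used from here on is an io_uring bdev layered on a freshly hot-added 512 MiB zram device, paired with a 512 MiB malloc bdev (malloc0) on the other side of each copy. A sketch of that setup; the disksize path is truncated in the trace, so the standard zram sysfs node is assumed, and uring.json is an assumed name for the config the trace shows being generated:

    # Create a 512 MiB zram device to back the uring bdev (sketch)
    id=$(cat /sys/class/zram-control/hot_add)      # prints the new device id, 1 in this run
    echo 512M > "/sys/block/zram${id}/disksize"    # assumed standard zram node; target truncated in the trace
    # uring.json (assumed name), matching the config shown in the trace:
    #   { "subsystems": [ { "subsystem": "bdev", "config": [
    #       { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
    #       { "method": "bdev_uring_create", "params": { "name": "uring0", "filename": "/dev/zram1" } },
    #       { "method": "bdev_wait_for_examine" } ] } ] }
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json uring.json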
00:10:22.258 [2024-12-10 11:13:28.800672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63547 ] 00:10:22.258 [2024-12-10 11:13:28.990787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.517 [2024-12-10 11:13:29.117261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.517 [2024-12-10 11:13:29.330275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.418  [2024-12-10T11:13:32.178Z] Copying: 157/512 [MB] (157 MBps) [2024-12-10T11:13:33.113Z] Copying: 302/512 [MB] (144 MBps) [2024-12-10T11:13:33.371Z] Copying: 455/512 [MB] (153 MBps) [2024-12-10T11:13:35.900Z] Copying: 512/512 [MB] (average 152 MBps) 00:10:29.074 00:10:29.074 11:13:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:10:29.074 11:13:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:10:29.074 11:13:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:29.074 11:13:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:29.074 { 00:10:29.074 "subsystems": [ 00:10:29.074 { 00:10:29.074 "subsystem": "bdev", 00:10:29.074 "config": [ 00:10:29.074 { 00:10:29.074 "params": { 00:10:29.074 "block_size": 512, 00:10:29.074 "num_blocks": 1048576, 00:10:29.074 "name": "malloc0" 00:10:29.074 }, 00:10:29.074 "method": "bdev_malloc_create" 00:10:29.074 }, 00:10:29.074 { 00:10:29.074 "params": { 00:10:29.074 "filename": "/dev/zram1", 00:10:29.074 "name": "uring0" 00:10:29.074 }, 00:10:29.074 "method": "bdev_uring_create" 00:10:29.074 }, 00:10:29.074 { 00:10:29.074 "method": "bdev_wait_for_examine" 00:10:29.074 } 00:10:29.074 ] 00:10:29.074 } 00:10:29.074 ] 00:10:29.074 } 00:10:29.332 [2024-12-10 11:13:35.952989] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:29.332 [2024-12-10 11:13:35.953154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63632 ] 00:10:29.332 [2024-12-10 11:13:36.129963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.591 [2024-12-10 11:13:36.242857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.849 [2024-12-10 11:13:36.436545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.751  [2024-12-10T11:13:39.143Z] Copying: 96/512 [MB] (96 MBps) [2024-12-10T11:13:40.144Z] Copying: 193/512 [MB] (97 MBps) [2024-12-10T11:13:41.078Z] Copying: 298/512 [MB] (105 MBps) [2024-12-10T11:13:42.013Z] Copying: 403/512 [MB] (104 MBps) [2024-12-10T11:13:44.543Z] Copying: 512/512 [MB] (average 104 MBps) 00:10:37.717 00:10:37.717 11:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:10:37.717 11:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ kyaulonylc5efe20thu8t479lvd0a968ix1fre08vrqyp8rprvbvvpziekul2yojwbjcc04se6bkbf07ri5cbqszvgl0bsflb6s62xdc6erylb8j0dnfx08bhsee9kcmw7exyo09khujpqzj5g33b5f9vv7an76fq505pc71837klxua21msxko94jfpo7bseadosckaet95fdwyxizyrslhi114osm0hp3w9gvn9wd14vww4o8fgwwa687svx6l7o08koqdh1m34d2dmixre3z9y5mh1vh4pkr9gg0nu2d6fsz481f6ymqpmsh9uygcsvlc0b8klwmxw5y8czq9ldt6t6k3nvw7bm9lev492x81lavxckumf9sbsavexioeciaqtmfeuyzwlpg1gm8uxsz5nyvbgzbe8r1gfijvi8kltf4dlqdm882z3bs6k8v1810y5xnch1e6hjw00923ed7cvas14sg7tiukvido2j441cgtcq7fn1rj085emui66yb3x3yakznne5xd8snhdd76vs7nhxcx9xhwnc9jg5sn5t7gccneyy0996ek1kc708bojijayd4vfmj282mra32pwwpp7zd2ivyaw4opknwobw1dpa6xtx3hwo9tfgwa22mui68m9vnsxyzk29xwxi917xy21ogkf16hont805pcn2rgc20pnaotyo9r31755mwoqibgal19m2mnjlpb5z3uxojslqni9ka8m90o77c6rzb452st9mil0sjlxy6vp2ulpwia1gazf61ra5rvxo33q42ni7uw38oi4pdklvrc7vcpb4lafzcfezwvfnh8e0e7ashk6lkkj4q17by1zsmjrklx2198tk21wchi14ueezuwsg5mkjvbf393lh6tkpq0blmfngin60l5cw6312y23xrpdxslbt0nacdk6fmhw3rednydzizsthxk9x8rnj14mpmadke6xiqm01qcsn8qmmgugmj94hd99bfu414c4yvh5gzmov2dxrce9le8 == 
\k\y\a\u\l\o\n\y\l\c\5\e\f\e\2\0\t\h\u\8\t\4\7\9\l\v\d\0\a\9\6\8\i\x\1\f\r\e\0\8\v\r\q\y\p\8\r\p\r\v\b\v\v\p\z\i\e\k\u\l\2\y\o\j\w\b\j\c\c\0\4\s\e\6\b\k\b\f\0\7\r\i\5\c\b\q\s\z\v\g\l\0\b\s\f\l\b\6\s\6\2\x\d\c\6\e\r\y\l\b\8\j\0\d\n\f\x\0\8\b\h\s\e\e\9\k\c\m\w\7\e\x\y\o\0\9\k\h\u\j\p\q\z\j\5\g\3\3\b\5\f\9\v\v\7\a\n\7\6\f\q\5\0\5\p\c\7\1\8\3\7\k\l\x\u\a\2\1\m\s\x\k\o\9\4\j\f\p\o\7\b\s\e\a\d\o\s\c\k\a\e\t\9\5\f\d\w\y\x\i\z\y\r\s\l\h\i\1\1\4\o\s\m\0\h\p\3\w\9\g\v\n\9\w\d\1\4\v\w\w\4\o\8\f\g\w\w\a\6\8\7\s\v\x\6\l\7\o\0\8\k\o\q\d\h\1\m\3\4\d\2\d\m\i\x\r\e\3\z\9\y\5\m\h\1\v\h\4\p\k\r\9\g\g\0\n\u\2\d\6\f\s\z\4\8\1\f\6\y\m\q\p\m\s\h\9\u\y\g\c\s\v\l\c\0\b\8\k\l\w\m\x\w\5\y\8\c\z\q\9\l\d\t\6\t\6\k\3\n\v\w\7\b\m\9\l\e\v\4\9\2\x\8\1\l\a\v\x\c\k\u\m\f\9\s\b\s\a\v\e\x\i\o\e\c\i\a\q\t\m\f\e\u\y\z\w\l\p\g\1\g\m\8\u\x\s\z\5\n\y\v\b\g\z\b\e\8\r\1\g\f\i\j\v\i\8\k\l\t\f\4\d\l\q\d\m\8\8\2\z\3\b\s\6\k\8\v\1\8\1\0\y\5\x\n\c\h\1\e\6\h\j\w\0\0\9\2\3\e\d\7\c\v\a\s\1\4\s\g\7\t\i\u\k\v\i\d\o\2\j\4\4\1\c\g\t\c\q\7\f\n\1\r\j\0\8\5\e\m\u\i\6\6\y\b\3\x\3\y\a\k\z\n\n\e\5\x\d\8\s\n\h\d\d\7\6\v\s\7\n\h\x\c\x\9\x\h\w\n\c\9\j\g\5\s\n\5\t\7\g\c\c\n\e\y\y\0\9\9\6\e\k\1\k\c\7\0\8\b\o\j\i\j\a\y\d\4\v\f\m\j\2\8\2\m\r\a\3\2\p\w\w\p\p\7\z\d\2\i\v\y\a\w\4\o\p\k\n\w\o\b\w\1\d\p\a\6\x\t\x\3\h\w\o\9\t\f\g\w\a\2\2\m\u\i\6\8\m\9\v\n\s\x\y\z\k\2\9\x\w\x\i\9\1\7\x\y\2\1\o\g\k\f\1\6\h\o\n\t\8\0\5\p\c\n\2\r\g\c\2\0\p\n\a\o\t\y\o\9\r\3\1\7\5\5\m\w\o\q\i\b\g\a\l\1\9\m\2\m\n\j\l\p\b\5\z\3\u\x\o\j\s\l\q\n\i\9\k\a\8\m\9\0\o\7\7\c\6\r\z\b\4\5\2\s\t\9\m\i\l\0\s\j\l\x\y\6\v\p\2\u\l\p\w\i\a\1\g\a\z\f\6\1\r\a\5\r\v\x\o\3\3\q\4\2\n\i\7\u\w\3\8\o\i\4\p\d\k\l\v\r\c\7\v\c\p\b\4\l\a\f\z\c\f\e\z\w\v\f\n\h\8\e\0\e\7\a\s\h\k\6\l\k\k\j\4\q\1\7\b\y\1\z\s\m\j\r\k\l\x\2\1\9\8\t\k\2\1\w\c\h\i\1\4\u\e\e\z\u\w\s\g\5\m\k\j\v\b\f\3\9\3\l\h\6\t\k\p\q\0\b\l\m\f\n\g\i\n\6\0\l\5\c\w\6\3\1\2\y\2\3\x\r\p\d\x\s\l\b\t\0\n\a\c\d\k\6\f\m\h\w\3\r\e\d\n\y\d\z\i\z\s\t\h\x\k\9\x\8\r\n\j\1\4\m\p\m\a\d\k\e\6\x\i\q\m\0\1\q\c\s\n\8\q\m\m\g\u\g\m\j\9\4\h\d\9\9\b\f\u\4\1\4\c\4\y\v\h\5\g\z\m\o\v\2\d\x\r\c\e\9\l\e\8 ]] 00:10:37.717 11:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:10:37.717 11:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ kyaulonylc5efe20thu8t479lvd0a968ix1fre08vrqyp8rprvbvvpziekul2yojwbjcc04se6bkbf07ri5cbqszvgl0bsflb6s62xdc6erylb8j0dnfx08bhsee9kcmw7exyo09khujpqzj5g33b5f9vv7an76fq505pc71837klxua21msxko94jfpo7bseadosckaet95fdwyxizyrslhi114osm0hp3w9gvn9wd14vww4o8fgwwa687svx6l7o08koqdh1m34d2dmixre3z9y5mh1vh4pkr9gg0nu2d6fsz481f6ymqpmsh9uygcsvlc0b8klwmxw5y8czq9ldt6t6k3nvw7bm9lev492x81lavxckumf9sbsavexioeciaqtmfeuyzwlpg1gm8uxsz5nyvbgzbe8r1gfijvi8kltf4dlqdm882z3bs6k8v1810y5xnch1e6hjw00923ed7cvas14sg7tiukvido2j441cgtcq7fn1rj085emui66yb3x3yakznne5xd8snhdd76vs7nhxcx9xhwnc9jg5sn5t7gccneyy0996ek1kc708bojijayd4vfmj282mra32pwwpp7zd2ivyaw4opknwobw1dpa6xtx3hwo9tfgwa22mui68m9vnsxyzk29xwxi917xy21ogkf16hont805pcn2rgc20pnaotyo9r31755mwoqibgal19m2mnjlpb5z3uxojslqni9ka8m90o77c6rzb452st9mil0sjlxy6vp2ulpwia1gazf61ra5rvxo33q42ni7uw38oi4pdklvrc7vcpb4lafzcfezwvfnh8e0e7ashk6lkkj4q17by1zsmjrklx2198tk21wchi14ueezuwsg5mkjvbf393lh6tkpq0blmfngin60l5cw6312y23xrpdxslbt0nacdk6fmhw3rednydzizsthxk9x8rnj14mpmadke6xiqm01qcsn8qmmgugmj94hd99bfu414c4yvh5gzmov2dxrce9le8 == 
\k\y\a\u\l\o\n\y\l\c\5\e\f\e\2\0\t\h\u\8\t\4\7\9\l\v\d\0\a\9\6\8\i\x\1\f\r\e\0\8\v\r\q\y\p\8\r\p\r\v\b\v\v\p\z\i\e\k\u\l\2\y\o\j\w\b\j\c\c\0\4\s\e\6\b\k\b\f\0\7\r\i\5\c\b\q\s\z\v\g\l\0\b\s\f\l\b\6\s\6\2\x\d\c\6\e\r\y\l\b\8\j\0\d\n\f\x\0\8\b\h\s\e\e\9\k\c\m\w\7\e\x\y\o\0\9\k\h\u\j\p\q\z\j\5\g\3\3\b\5\f\9\v\v\7\a\n\7\6\f\q\5\0\5\p\c\7\1\8\3\7\k\l\x\u\a\2\1\m\s\x\k\o\9\4\j\f\p\o\7\b\s\e\a\d\o\s\c\k\a\e\t\9\5\f\d\w\y\x\i\z\y\r\s\l\h\i\1\1\4\o\s\m\0\h\p\3\w\9\g\v\n\9\w\d\1\4\v\w\w\4\o\8\f\g\w\w\a\6\8\7\s\v\x\6\l\7\o\0\8\k\o\q\d\h\1\m\3\4\d\2\d\m\i\x\r\e\3\z\9\y\5\m\h\1\v\h\4\p\k\r\9\g\g\0\n\u\2\d\6\f\s\z\4\8\1\f\6\y\m\q\p\m\s\h\9\u\y\g\c\s\v\l\c\0\b\8\k\l\w\m\x\w\5\y\8\c\z\q\9\l\d\t\6\t\6\k\3\n\v\w\7\b\m\9\l\e\v\4\9\2\x\8\1\l\a\v\x\c\k\u\m\f\9\s\b\s\a\v\e\x\i\o\e\c\i\a\q\t\m\f\e\u\y\z\w\l\p\g\1\g\m\8\u\x\s\z\5\n\y\v\b\g\z\b\e\8\r\1\g\f\i\j\v\i\8\k\l\t\f\4\d\l\q\d\m\8\8\2\z\3\b\s\6\k\8\v\1\8\1\0\y\5\x\n\c\h\1\e\6\h\j\w\0\0\9\2\3\e\d\7\c\v\a\s\1\4\s\g\7\t\i\u\k\v\i\d\o\2\j\4\4\1\c\g\t\c\q\7\f\n\1\r\j\0\8\5\e\m\u\i\6\6\y\b\3\x\3\y\a\k\z\n\n\e\5\x\d\8\s\n\h\d\d\7\6\v\s\7\n\h\x\c\x\9\x\h\w\n\c\9\j\g\5\s\n\5\t\7\g\c\c\n\e\y\y\0\9\9\6\e\k\1\k\c\7\0\8\b\o\j\i\j\a\y\d\4\v\f\m\j\2\8\2\m\r\a\3\2\p\w\w\p\p\7\z\d\2\i\v\y\a\w\4\o\p\k\n\w\o\b\w\1\d\p\a\6\x\t\x\3\h\w\o\9\t\f\g\w\a\2\2\m\u\i\6\8\m\9\v\n\s\x\y\z\k\2\9\x\w\x\i\9\1\7\x\y\2\1\o\g\k\f\1\6\h\o\n\t\8\0\5\p\c\n\2\r\g\c\2\0\p\n\a\o\t\y\o\9\r\3\1\7\5\5\m\w\o\q\i\b\g\a\l\1\9\m\2\m\n\j\l\p\b\5\z\3\u\x\o\j\s\l\q\n\i\9\k\a\8\m\9\0\o\7\7\c\6\r\z\b\4\5\2\s\t\9\m\i\l\0\s\j\l\x\y\6\v\p\2\u\l\p\w\i\a\1\g\a\z\f\6\1\r\a\5\r\v\x\o\3\3\q\4\2\n\i\7\u\w\3\8\o\i\4\p\d\k\l\v\r\c\7\v\c\p\b\4\l\a\f\z\c\f\e\z\w\v\f\n\h\8\e\0\e\7\a\s\h\k\6\l\k\k\j\4\q\1\7\b\y\1\z\s\m\j\r\k\l\x\2\1\9\8\t\k\2\1\w\c\h\i\1\4\u\e\e\z\u\w\s\g\5\m\k\j\v\b\f\3\9\3\l\h\6\t\k\p\q\0\b\l\m\f\n\g\i\n\6\0\l\5\c\w\6\3\1\2\y\2\3\x\r\p\d\x\s\l\b\t\0\n\a\c\d\k\6\f\m\h\w\3\r\e\d\n\y\d\z\i\z\s\t\h\x\k\9\x\8\r\n\j\1\4\m\p\m\a\d\k\e\6\x\i\q\m\0\1\q\c\s\n\8\q\m\m\g\u\g\m\j\9\4\h\d\9\9\b\f\u\4\1\4\c\4\y\v\h\5\g\z\m\o\v\2\d\x\r\c\e\9\l\e\8 ]] 00:10:37.717 11:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:37.976 11:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:10:37.976 11:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:10:37.976 11:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:37.976 11:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:38.234 { 00:10:38.234 "subsystems": [ 00:10:38.234 { 00:10:38.234 "subsystem": "bdev", 00:10:38.234 "config": [ 00:10:38.234 { 00:10:38.234 "params": { 00:10:38.234 "block_size": 512, 00:10:38.234 "num_blocks": 1048576, 00:10:38.234 "name": "malloc0" 00:10:38.234 }, 00:10:38.234 "method": "bdev_malloc_create" 00:10:38.234 }, 00:10:38.234 { 00:10:38.234 "params": { 00:10:38.234 "filename": "/dev/zram1", 00:10:38.234 "name": "uring0" 00:10:38.234 }, 00:10:38.234 "method": "bdev_uring_create" 00:10:38.234 }, 00:10:38.234 { 00:10:38.234 "method": "bdev_wait_for_examine" 00:10:38.234 } 00:10:38.234 ] 00:10:38.234 } 00:10:38.234 ] 00:10:38.234 } 00:10:38.234 [2024-12-10 11:13:44.884479] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
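The verification just above reads the first 1024 bytes back out of both magic.dump0 and the magic.dump1 recovered through uring0, compares each against the generated magic, and then diffs the two files in full. A condensed sketch; $magic stands for the 1024-character string generated earlier in the test:

    # Verify the round trip through the zram-backed uring bdev (sketch)
    DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
    read -rn1024 verify_magic < "$DD_DIR/magic.dump0"
    [[ $verify_magic == "$magic" ]]
    read -rn1024 verify_magic < "$DD_DIR/magic.dump1"
    [[ $verify_magic == "$magic" ]]
    diff -q "$DD_DIR/magic.dump0" "$DD_DIR/magic.dump1"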
00:10:38.234 [2024-12-10 11:13:44.884636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63776 ] 00:10:38.234 [2024-12-10 11:13:45.058927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.491 [2024-12-10 11:13:45.197652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.750 [2024-12-10 11:13:45.385657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:40.650  [2024-12-10T11:13:48.042Z] Copying: 110/512 [MB] (110 MBps) [2024-12-10T11:13:49.416Z] Copying: 216/512 [MB] (105 MBps) [2024-12-10T11:13:50.350Z] Copying: 327/512 [MB] (111 MBps) [2024-12-10T11:13:50.915Z] Copying: 435/512 [MB] (107 MBps) [2024-12-10T11:13:53.444Z] Copying: 512/512 [MB] (average 108 MBps) 00:10:46.618 00:10:46.618 11:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:10:46.618 11:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:10:46.618 11:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:46.618 11:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:46.618 11:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:10:46.618 11:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:10:46.618 11:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:46.618 11:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:46.618 { 00:10:46.618 "subsystems": [ 00:10:46.618 { 00:10:46.618 "subsystem": "bdev", 00:10:46.618 "config": [ 00:10:46.618 { 00:10:46.618 "params": { 00:10:46.618 "block_size": 512, 00:10:46.618 "num_blocks": 1048576, 00:10:46.618 "name": "malloc0" 00:10:46.618 }, 00:10:46.618 "method": "bdev_malloc_create" 00:10:46.618 }, 00:10:46.618 { 00:10:46.618 "params": { 00:10:46.618 "filename": "/dev/zram1", 00:10:46.618 "name": "uring0" 00:10:46.618 }, 00:10:46.618 "method": "bdev_uring_create" 00:10:46.618 }, 00:10:46.618 { 00:10:46.618 "params": { 00:10:46.618 "name": "uring0" 00:10:46.618 }, 00:10:46.618 "method": "bdev_uring_delete" 00:10:46.618 }, 00:10:46.618 { 00:10:46.618 "method": "bdev_wait_for_examine" 00:10:46.618 } 00:10:46.618 ] 00:10:46.618 } 00:10:46.618 ] 00:10:46.618 } 00:10:46.618 [2024-12-10 11:13:53.293958] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:46.618 [2024-12-10 11:13:53.295045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63882 ] 00:10:46.876 [2024-12-10 11:13:53.492672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.876 [2024-12-10 11:13:53.601093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.134 [2024-12-10 11:13:53.785168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.700  [2024-12-10T11:13:57.058Z] Copying: 0/0 [B] (average 0 Bps) 00:10:50.232 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:50.232 11:13:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:50.559 { 00:10:50.559 "subsystems": [ 00:10:50.559 { 00:10:50.559 "subsystem": "bdev", 00:10:50.559 "config": [ 00:10:50.559 { 00:10:50.559 "params": { 00:10:50.559 "block_size": 512, 00:10:50.559 "num_blocks": 1048576, 00:10:50.559 "name": "malloc0" 00:10:50.559 }, 00:10:50.559 "method": "bdev_malloc_create" 00:10:50.559 }, 00:10:50.559 { 00:10:50.559 "params": { 00:10:50.559 "filename": "/dev/zram1", 00:10:50.559 "name": "uring0" 00:10:50.559 }, 00:10:50.559 "method": "bdev_uring_create" 00:10:50.559 }, 00:10:50.559 { 00:10:50.559 "params": { 00:10:50.559 "name": "uring0" 00:10:50.559 }, 00:10:50.559 "method": 
"bdev_uring_delete" 00:10:50.559 }, 00:10:50.559 { 00:10:50.559 "method": "bdev_wait_for_examine" 00:10:50.559 } 00:10:50.559 ] 00:10:50.559 } 00:10:50.559 ] 00:10:50.559 } 00:10:50.559 [2024-12-10 11:13:57.124389] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:50.559 [2024-12-10 11:13:57.124633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63940 ] 00:10:50.559 [2024-12-10 11:13:57.318230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.817 [2024-12-10 11:13:57.469385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.077 [2024-12-10 11:13:57.724716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.012 [2024-12-10 11:13:58.509574] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:10:52.012 [2024-12-10 11:13:58.509687] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:10:52.012 [2024-12-10 11:13:58.509714] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:10:52.012 [2024-12-10 11:13:58.509745] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:54.542 [2024-12-10 11:14:00.907959] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:54.542 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:10:54.542 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:54.542 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:10:54.542 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:10:54.542 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:10:54.542 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:54.542 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:10:54.542 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:10:54.543 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:10:54.543 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:10:54.543 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:10:54.543 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:54.801 00:10:54.801 real 0m36.571s 00:10:54.801 user 0m30.035s 00:10:54.801 sys 0m19.858s 00:10:54.801 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.801 11:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:54.801 ************************************ 00:10:54.801 END TEST dd_uring_copy 00:10:54.801 ************************************ 00:10:54.801 00:10:54.801 real 0m36.792s 00:10:54.801 user 0m30.164s 00:10:54.801 sys 0m19.956s 00:10:54.801 11:14:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.801 11:14:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:10:54.801 
************************************ 00:10:54.801 END TEST spdk_dd_uring 00:10:54.801 ************************************ 00:10:54.801 11:14:01 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:54.801 11:14:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:54.801 11:14:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.801 11:14:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:54.801 ************************************ 00:10:54.801 START TEST spdk_dd_sparse 00:10:54.801 ************************************ 00:10:54.801 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:55.059 * Looking for test storage... 00:10:55.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:55.059 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.059 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.059 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.060 --rc genhtml_branch_coverage=1 00:10:55.060 --rc genhtml_function_coverage=1 00:10:55.060 --rc genhtml_legend=1 00:10:55.060 --rc geninfo_all_blocks=1 00:10:55.060 --rc geninfo_unexecuted_blocks=1 00:10:55.060 00:10:55.060 ' 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.060 --rc genhtml_branch_coverage=1 00:10:55.060 --rc genhtml_function_coverage=1 00:10:55.060 --rc genhtml_legend=1 00:10:55.060 --rc geninfo_all_blocks=1 00:10:55.060 --rc geninfo_unexecuted_blocks=1 00:10:55.060 00:10:55.060 ' 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.060 --rc genhtml_branch_coverage=1 00:10:55.060 --rc genhtml_function_coverage=1 00:10:55.060 --rc genhtml_legend=1 00:10:55.060 --rc geninfo_all_blocks=1 00:10:55.060 --rc geninfo_unexecuted_blocks=1 00:10:55.060 00:10:55.060 ' 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.060 --rc genhtml_branch_coverage=1 00:10:55.060 --rc genhtml_function_coverage=1 00:10:55.060 --rc genhtml_legend=1 00:10:55.060 --rc geninfo_all_blocks=1 00:10:55.060 --rc geninfo_unexecuted_blocks=1 00:10:55.060 00:10:55.060 ' 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.060 11:14:01 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:10:55.060 1+0 records in 00:10:55.060 1+0 records out 00:10:55.060 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00538719 s, 779 MB/s 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:10:55.060 1+0 records in 00:10:55.060 1+0 records out 00:10:55.060 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00570691 s, 735 MB/s 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:10:55.060 1+0 records in 00:10:55.060 1+0 records out 00:10:55.060 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00549442 s, 763 MB/s 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:55.060 ************************************ 00:10:55.060 START TEST dd_sparse_file_to_file 00:10:55.060 ************************************ 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:55.060 11:14:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:55.060 { 00:10:55.060 "subsystems": [ 00:10:55.060 { 00:10:55.060 "subsystem": "bdev", 00:10:55.060 "config": [ 00:10:55.060 { 00:10:55.060 "params": { 00:10:55.060 "block_size": 4096, 00:10:55.060 "filename": "dd_sparse_aio_disk", 00:10:55.060 "name": "dd_aio" 00:10:55.060 }, 00:10:55.060 "method": "bdev_aio_create" 00:10:55.060 }, 00:10:55.060 { 00:10:55.060 "params": { 00:10:55.060 "lvs_name": "dd_lvstore", 00:10:55.060 "bdev_name": "dd_aio" 00:10:55.060 }, 00:10:55.060 "method": "bdev_lvol_create_lvstore" 00:10:55.060 }, 00:10:55.060 { 00:10:55.060 "method": "bdev_wait_for_examine" 00:10:55.060 } 00:10:55.060 ] 00:10:55.060 } 00:10:55.060 ] 00:10:55.060 } 00:10:55.060 [2024-12-10 11:14:01.880009] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:10:55.061 [2024-12-10 11:14:01.880167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64074 ] 00:10:55.318 [2024-12-10 11:14:02.093656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.577 [2024-12-10 11:14:02.233175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.835 [2024-12-10 11:14:02.417022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.835  [2024-12-10T11:14:04.037Z] Copying: 12/36 [MB] (average 1000 MBps) 00:10:57.211 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:57.211 00:10:57.211 real 0m2.068s 00:10:57.211 user 0m1.740s 00:10:57.211 sys 0m1.048s 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:57.211 ************************************ 00:10:57.211 END TEST dd_sparse_file_to_file 00:10:57.211 ************************************ 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:57.211 ************************************ 00:10:57.211 START TEST dd_sparse_file_to_bdev 00:10:57.211 ************************************ 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:57.211 11:14:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:57.211 { 00:10:57.211 "subsystems": [ 00:10:57.211 { 00:10:57.211 "subsystem": "bdev", 00:10:57.211 "config": [ 00:10:57.211 { 00:10:57.211 "params": { 00:10:57.211 "block_size": 4096, 00:10:57.211 "filename": "dd_sparse_aio_disk", 00:10:57.211 "name": "dd_aio" 00:10:57.211 }, 00:10:57.211 "method": "bdev_aio_create" 00:10:57.211 }, 00:10:57.211 { 00:10:57.211 "params": { 00:10:57.211 "lvs_name": "dd_lvstore", 00:10:57.211 "lvol_name": "dd_lvol", 00:10:57.211 "size_in_mib": 36, 00:10:57.211 "thin_provision": true 00:10:57.211 }, 00:10:57.211 "method": "bdev_lvol_create" 00:10:57.211 }, 00:10:57.211 { 00:10:57.211 "method": "bdev_wait_for_examine" 00:10:57.211 } 00:10:57.211 ] 00:10:57.211 } 00:10:57.211 ] 00:10:57.211 } 00:10:57.211 [2024-12-10 11:14:04.021209] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:57.211 [2024-12-10 11:14:04.021381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64134 ] 00:10:57.469 [2024-12-10 11:14:04.201478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.728 [2024-12-10 11:14:04.327267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.728 [2024-12-10 11:14:04.520783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.986  [2024-12-10T11:14:06.190Z] Copying: 12/36 [MB] (average 666 MBps) 00:10:59.364 00:10:59.364 ************************************ 00:10:59.364 END TEST dd_sparse_file_to_bdev 00:10:59.364 ************************************ 00:10:59.364 00:10:59.364 real 0m1.992s 00:10:59.364 user 0m1.688s 00:10:59.364 sys 0m1.074s 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:59.364 ************************************ 00:10:59.364 START TEST dd_sparse_bdev_to_file 00:10:59.364 ************************************ 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
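Together, dd_sparse_file_to_file, dd_sparse_file_to_bdev and dd_sparse_bdev_to_file push the same 36 MiB sparse file (three 4 MiB extents written at offsets 0, 16 MiB and 32 MiB, as prepared above) around a round trip through a thin-provisioned lvol on the AIO-backed dd_lvstore. A condensed sketch of the three copies, with paths as in this run and $CONF standing in for the per-step JSON printed in the trace and fed over /dev/fd/62:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  "$SPDK_DD" --if=file_zero1 --of=file_zero2         --bs=12582912 --sparse --json "$CONF"   # file -> file
  "$SPDK_DD" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json "$CONF"   # file -> thin lvol
  "$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json "$CONF"   # lvol -> file

With --sparse the holes survive each hop: the stat checks in the trace show the same 37748736-byte apparent size and 24576 allocated blocks on both sides of every copy.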
00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:59.364 11:14:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:59.364 { 00:10:59.364 "subsystems": [ 00:10:59.364 { 00:10:59.364 "subsystem": "bdev", 00:10:59.364 "config": [ 00:10:59.364 { 00:10:59.364 "params": { 00:10:59.364 "block_size": 4096, 00:10:59.364 "filename": "dd_sparse_aio_disk", 00:10:59.364 "name": "dd_aio" 00:10:59.364 }, 00:10:59.364 "method": "bdev_aio_create" 00:10:59.364 }, 00:10:59.364 { 00:10:59.364 "method": "bdev_wait_for_examine" 00:10:59.364 } 00:10:59.364 ] 00:10:59.364 } 00:10:59.364 ] 00:10:59.364 } 00:10:59.364 [2024-12-10 11:14:06.032043] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:10:59.364 [2024-12-10 11:14:06.032463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64185 ] 00:10:59.623 [2024-12-10 11:14:06.212592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.623 [2024-12-10 11:14:06.316877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.881 [2024-12-10 11:14:06.502044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:59.881  [2024-12-10T11:14:08.083Z] Copying: 12/36 [MB] (average 1200 MBps) 00:11:01.257 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:01.257 00:11:01.257 real 0m1.833s 00:11:01.257 user 0m1.521s 
00:11:01.257 sys 0m1.006s 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.257 ************************************ 00:11:01.257 END TEST dd_sparse_bdev_to_file 00:11:01.257 ************************************ 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:11:01.257 11:14:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:11:01.258 11:14:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:11:01.258 11:14:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:11:01.258 11:14:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:11:01.258 ************************************ 00:11:01.258 END TEST spdk_dd_sparse 00:11:01.258 ************************************ 00:11:01.258 00:11:01.258 real 0m6.260s 00:11:01.258 user 0m5.113s 00:11:01.258 sys 0m3.326s 00:11:01.258 11:14:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.258 11:14:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:01.258 11:14:07 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:01.258 11:14:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.258 11:14:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.258 11:14:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:01.258 ************************************ 00:11:01.258 START TEST spdk_dd_negative 00:11:01.258 ************************************ 00:11:01.258 11:14:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:01.258 * Looking for test storage... 
00:11:01.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:01.258 11:14:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:01.258 11:14:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:11:01.258 11:14:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.258 --rc genhtml_branch_coverage=1 00:11:01.258 --rc genhtml_function_coverage=1 00:11:01.258 --rc genhtml_legend=1 00:11:01.258 --rc geninfo_all_blocks=1 00:11:01.258 --rc geninfo_unexecuted_blocks=1 00:11:01.258 00:11:01.258 ' 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.258 --rc genhtml_branch_coverage=1 00:11:01.258 --rc genhtml_function_coverage=1 00:11:01.258 --rc genhtml_legend=1 00:11:01.258 --rc geninfo_all_blocks=1 00:11:01.258 --rc geninfo_unexecuted_blocks=1 00:11:01.258 00:11:01.258 ' 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.258 --rc genhtml_branch_coverage=1 00:11:01.258 --rc genhtml_function_coverage=1 00:11:01.258 --rc genhtml_legend=1 00:11:01.258 --rc geninfo_all_blocks=1 00:11:01.258 --rc geninfo_unexecuted_blocks=1 00:11:01.258 00:11:01.258 ' 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.258 --rc genhtml_branch_coverage=1 00:11:01.258 --rc genhtml_function_coverage=1 00:11:01.258 --rc genhtml_legend=1 00:11:01.258 --rc geninfo_all_blocks=1 00:11:01.258 --rc geninfo_unexecuted_blocks=1 00:11:01.258 00:11:01.258 ' 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:01.258 ************************************ 00:11:01.258 START TEST 
dd_invalid_arguments 00:11:01.258 ************************************ 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:01.258 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:01.518 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:11:01.518 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:11:01.518 00:11:01.518 CPU options: 00:11:01.518 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:11:01.518 (like [0,1,10]) 00:11:01.518 --lcores lcore to CPU mapping list. The list is in the format: 00:11:01.518 [<,lcores[@CPUs]>...] 00:11:01.518 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:01.518 Within the group, '-' is used for range separator, 00:11:01.518 ',' is used for single number separator. 00:11:01.518 '( )' can be omitted for single element group, 00:11:01.518 '@' can be omitted if cpus and lcores have the same value 00:11:01.518 --disable-cpumask-locks Disable CPU core lock files. 00:11:01.518 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:11:01.518 pollers in the app support interrupt mode) 00:11:01.518 -p, --main-core main (primary) core for DPDK 00:11:01.518 00:11:01.518 Configuration options: 00:11:01.518 -c, --config, --json JSON config file 00:11:01.518 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:01.518 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:11:01.518 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:01.518 --rpcs-allowed comma-separated list of permitted RPCS 00:11:01.518 --json-ignore-init-errors don't exit on invalid config entry 00:11:01.518 00:11:01.518 Memory options: 00:11:01.518 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:01.518 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:01.518 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:01.518 -R, --huge-unlink unlink huge files after initialization 00:11:01.518 -n, --mem-channels number of memory channels used for DPDK 00:11:01.518 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:01.518 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:01.518 --no-huge run without using hugepages 00:11:01.518 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:11:01.518 -i, --shm-id shared memory ID (optional) 00:11:01.518 -g, --single-file-segments force creating just one hugetlbfs file 00:11:01.518 00:11:01.518 PCI options: 00:11:01.518 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:01.518 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:01.518 -u, --no-pci disable PCI access 00:11:01.518 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:01.518 00:11:01.518 Log options: 00:11:01.518 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:11:01.518 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:11:01.518 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:11:01.518 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:11:01.518 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:11:01.518 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:11:01.518 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:11:01.518 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:11:01.518 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:11:01.518 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:11:01.518 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:11:01.518 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:11:01.518 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:11:01.518 --silence-noticelog disable notice level logging to stderr 00:11:01.518 00:11:01.518 Trace options: 00:11:01.518 --num-trace-entries number of trace entries for each core, must be power of 2, 00:11:01.518 [2024-12-10 11:14:08.168158] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:11:01.518 setting 0 to disable trace (default 32768) 00:11:01.518 Tracepoints vary in size and can use more than one trace entry. 00:11:01.518 -e, --tpoint-group [:] 00:11:01.518 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:11:01.518 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:11:01.518 blob, bdev_raid, scheduler, all). 00:11:01.518 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:11:01.518 a tracepoint group. First tpoint inside a group can be enabled by 00:11:01.518 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:11:01.518 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:11:01.518 in /include/spdk_internal/trace_defs.h 00:11:01.518 00:11:01.518 Other options: 00:11:01.518 -h, --help show this usage 00:11:01.518 -v, --version print SPDK version 00:11:01.518 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:01.518 --env-context Opaque context for use of the env implementation 00:11:01.518 00:11:01.518 Application specific: 00:11:01.518 [--------- DD Options ---------] 00:11:01.518 --if Input file. Must specify either --if or --ib. 00:11:01.518 --ib Input bdev. Must specifier either --if or --ib 00:11:01.518 --of Output file. Must specify either --of or --ob. 00:11:01.518 --ob Output bdev. Must specify either --of or --ob. 00:11:01.518 --iflag Input file flags. 00:11:01.518 --oflag Output file flags. 00:11:01.518 --bs I/O unit size (default: 4096) 00:11:01.518 --qd Queue depth (default: 2) 00:11:01.518 --count I/O unit count. The number of I/O units to copy. (default: all) 00:11:01.518 --skip Skip this many I/O units at start of input. (default: 0) 00:11:01.518 --seek Skip this many I/O units at start of output. (default: 0) 00:11:01.518 --aio Force usage of AIO. (by default io_uring is used if available) 00:11:01.518 --sparse Enable hole skipping in input target 00:11:01.518 Available iflag and oflag values: 00:11:01.518 append - append mode 00:11:01.518 direct - use direct I/O for data 00:11:01.518 directory - fail unless a directory 00:11:01.518 dsync - use synchronized I/O for data 00:11:01.518 noatime - do not update access time 00:11:01.518 noctty - do not assign controlling terminal from file 00:11:01.518 nofollow - do not follow symlinks 00:11:01.518 nonblock - use non-blocking I/O 00:11:01.518 sync - use synchronized I/O for data and metadata 00:11:01.518 ************************************ 00:11:01.518 END TEST dd_invalid_arguments 00:11:01.518 ************************************ 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:01.518 00:11:01.518 real 0m0.148s 00:11:01.518 user 0m0.083s 00:11:01.518 sys 0m0.063s 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:01.518 ************************************ 00:11:01.518 START TEST dd_double_input 00:11:01.518 ************************************ 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.518 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.519 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.519 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:01.519 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:01.778 [2024-12-10 11:14:08.384544] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
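The usage text printed above spells out the constraint this negative test exercises: spdk_dd takes exactly one input (--if or --ib) and exactly one output (--of or --ob), so combining --if with --ib is rejected. A well-formed file-to-file call over the same dump files would look like this sketch (not taken from the log; --bs=4096 is simply the documented default):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --bs=4096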
00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:01.778 00:11:01.778 real 0m0.173s 00:11:01.778 user 0m0.095s 00:11:01.778 sys 0m0.075s 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.778 ************************************ 00:11:01.778 END TEST dd_double_input 00:11:01.778 ************************************ 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:01.778 ************************************ 00:11:01.778 START TEST dd_double_output 00:11:01.778 ************************************ 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:01.778 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:01.778 [2024-12-10 11:14:08.596608] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.036 00:11:02.036 real 0m0.190s 00:11:02.036 user 0m0.099s 00:11:02.036 sys 0m0.084s 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.036 ************************************ 00:11:02.036 END TEST dd_double_output 00:11:02.036 ************************************ 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:02.036 ************************************ 00:11:02.036 START TEST dd_no_input 00:11:02.036 ************************************ 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:02.036 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:02.036 [2024-12-10 11:14:08.820220] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:11:02.294 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:11:02.294 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.294 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.294 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.294 00:11:02.294 real 0m0.169s 00:11:02.294 user 0m0.095s 00:11:02.294 sys 0m0.071s 00:11:02.294 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:11:02.295 ************************************ 00:11:02.295 END TEST dd_no_input 00:11:02.295 ************************************ 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:02.295 ************************************ 00:11:02.295 START TEST dd_no_output 00:11:02.295 ************************************ 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:02.295 11:14:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:02.295 [2024-12-10 11:14:09.061704] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:11:02.554 11:14:09 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.554 ************************************ 00:11:02.554 END TEST dd_no_output 00:11:02.554 ************************************ 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.554 00:11:02.554 real 0m0.201s 00:11:02.554 user 0m0.110s 00:11:02.554 sys 0m0.087s 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:02.554 ************************************ 00:11:02.554 START TEST dd_wrong_blocksize 00:11:02.554 ************************************ 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:02.554 [2024-12-10 11:14:09.289723] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.554 00:11:02.554 real 0m0.175s 00:11:02.554 user 0m0.094s 00:11:02.554 sys 0m0.079s 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.554 ************************************ 00:11:02.554 END TEST dd_wrong_blocksize 00:11:02.554 ************************************ 00:11:02.554 11:14:09 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:02.813 ************************************ 00:11:02.813 START TEST dd_smaller_blocksize 00:11:02.813 ************************************ 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.813 
11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:02.813 11:14:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:02.813 [2024-12-10 11:14:09.491142] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:11:02.813 [2024-12-10 11:14:09.491545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64435 ] 00:11:03.071 [2024-12-10 11:14:09.665303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.071 [2024-12-10 11:14:09.780773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.330 [2024-12-10 11:14:09.990161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.588 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:04.155 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:04.155 [2024-12-10 11:14:10.787522] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:11:04.155 [2024-12-10 11:14:10.787678] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.722 [2024-12-10 11:14:11.540019] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:05.288 00:11:05.288 real 0m2.419s 00:11:05.288 user 0m1.585s 00:11:05.288 sys 0m0.716s 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.288 ************************************ 00:11:05.288 END TEST dd_smaller_blocksize 00:11:05.288 ************************************ 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:05.288 ************************************ 00:11:05.288 START TEST dd_invalid_count 00:11:05.288 ************************************ 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
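The dd_smaller_blocksize run above exits with status 244, which the harness then folds down to 116 and finally to 1 before asserting a non-zero result. The snippet below is only a reconstruction of what that normalization appears to do, inferred from the values visible in these traces (244 -> 116 -> 1 here, 234 -> 106 -> 1 and 228 -> 100 -> 1 later, 22 left untouched); the actual logic lives in autotest_common.sh and may differ in detail.

```bash
#!/usr/bin/env bash
# Illustrative reconstruction of the exit-status handling seen in these traces.
normalize_es() {
    local es=$1
    # Codes above 128 are folded back into the 0-127 range, as the
    # (( es > 128 )) branch in the xtrace does.
    (( es > 128 )) && es=$(( es - 128 ))
    case "$es" in
        100|106|116) es=1 ;;   # values observed being collapsed to a plain failure
    esac
    echo "$es"
}

normalize_es 244   # -> 1, matching the dd_smaller_blocksize trace above
normalize_es 22    # -> 22, matching the es=22 cases earlier in this log
```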
00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:05.288 11:14:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:05.288 [2024-12-10 11:14:11.987936] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:05.289 00:11:05.289 real 0m0.192s 00:11:05.289 user 0m0.092s 00:11:05.289 sys 0m0.097s 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:11:05.289 ************************************ 00:11:05.289 END TEST dd_invalid_count 00:11:05.289 ************************************ 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:05.289 ************************************ 
00:11:05.289 START TEST dd_invalid_oflag 00:11:05.289 ************************************ 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:05.289 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:05.547 [2024-12-10 11:14:12.192929] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:05.547 00:11:05.547 real 0m0.152s 00:11:05.547 user 0m0.084s 00:11:05.547 sys 0m0.066s 00:11:05.547 ************************************ 00:11:05.547 END TEST dd_invalid_oflag 00:11:05.547 ************************************ 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:05.547 ************************************ 00:11:05.547 START TEST dd_invalid_iflag 00:11:05.547 
************************************ 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:05.547 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:05.806 [2024-12-10 11:14:12.398684] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:05.806 00:11:05.806 real 0m0.166s 00:11:05.806 user 0m0.095s 00:11:05.806 sys 0m0.069s 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:11:05.806 ************************************ 00:11:05.806 END TEST dd_invalid_iflag 00:11:05.806 ************************************ 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:05.806 ************************************ 00:11:05.806 START TEST dd_unknown_flag 00:11:05.806 ************************************ 00:11:05.806 
11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:05.806 11:14:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:05.806 [2024-12-10 11:14:12.601678] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
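Each case in this suite is driven through the same run_test wrapper (e.g. `run_test dd_unknown_flag unknown_flag` above), which produces the asterisk banners, START/END TEST lines and real/user/sys timings that frame every trace here. The sketch below is a hypothetical, much-simplified version of that wrapper, written only to mirror the output visible in this log; the real helper in autotest_common.sh does more (argument checks, xtrace control).

```bash
#!/usr/bin/env bash
# Hypothetical, pared-down run_test: banner, timed execution, banner.
run_test() {
    local name=$1; shift
    printf '%s\n' '************************************' \
                  "START TEST $name" \
                  '************************************'
    time "$@"            # bash's time keyword prints the real/user/sys lines
    local rc=$?
    printf '%s\n' '************************************' \
                  "END TEST $name" \
                  '************************************'
    return "$rc"
}

run_test demo_case sleep 0.1   # stand-in; the real calls look like: run_test dd_unknown_flag unknown_flag
```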
00:11:05.806 [2024-12-10 11:14:12.601851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64553 ] 00:11:06.064 [2024-12-10 11:14:12.784459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.323 [2024-12-10 11:14:12.912943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.323 [2024-12-10 11:14:13.111537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:06.587 [2024-12-10 11:14:13.218424] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:11:06.587 [2024-12-10 11:14:13.218544] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:06.587 [2024-12-10 11:14:13.218669] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:11:06.587 [2024-12-10 11:14:13.218711] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:06.587 [2024-12-10 11:14:13.219061] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:11:06.587 [2024-12-10 11:14:13.219095] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:06.587 [2024-12-10 11:14:13.219177] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:06.587 [2024-12-10 11:14:13.219199] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:07.155 [2024-12-10 11:14:13.967513] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:07.414 11:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:11:07.414 11:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:07.414 11:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:11:07.414 11:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:11:07.414 11:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:11:07.414 11:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:07.414 00:11:07.414 real 0m1.738s 00:11:07.414 user 0m1.421s 00:11:07.414 sys 0m0.206s 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:11:07.673 ************************************ 00:11:07.673 END TEST dd_unknown_flag 00:11:07.673 ************************************ 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:07.673 ************************************ 00:11:07.673 START TEST dd_invalid_json 00:11:07.673 ************************************ 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:07.673 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:07.673 [2024-12-10 11:14:14.375267] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
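The dd_invalid_json case above points `--json` at /dev/fd/62 and feeds that descriptor nothing (the `:` no-op), so spdk_dd's JSON parser has an empty document to reject. One way to approximate the same thing by hand is process substitution, sketched below; the file paths and flags are the ones used in the trace, and the command is expected to exit non-zero.

```bash
#!/usr/bin/env bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# <(:) expands to a /dev/fd/N path whose contents are empty, roughly what the
# test wires up on fd 62, so the JSON config passed to spdk_dd is empty and
# the run fails.
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
           --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
           --json <(:)
echo "spdk_dd exited with $?"   # expected: non-zero
```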
00:11:07.673 [2024-12-10 11:14:14.375475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64599 ] 00:11:07.932 [2024-12-10 11:14:14.547040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.932 [2024-12-10 11:14:14.661398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.932 [2024-12-10 11:14:14.661511] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:11:07.932 [2024-12-10 11:14:14.661536] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:07.932 [2024-12-10 11:14:14.661552] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:07.932 [2024-12-10 11:14:14.661626] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:08.191 00:11:08.191 real 0m0.655s 00:11:08.191 user 0m0.430s 00:11:08.191 sys 0m0.120s 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:11:08.191 ************************************ 00:11:08.191 END TEST dd_invalid_json 00:11:08.191 ************************************ 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:08.191 ************************************ 00:11:08.191 START TEST dd_invalid_seek 00:11:08.191 ************************************ 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:08.191 
11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:08.191 11:14:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:08.448 { 00:11:08.448 "subsystems": [ 00:11:08.448 { 00:11:08.448 "subsystem": "bdev", 00:11:08.448 "config": [ 00:11:08.448 { 00:11:08.448 "params": { 00:11:08.448 "block_size": 512, 00:11:08.448 "num_blocks": 512, 00:11:08.448 "name": "malloc0" 00:11:08.448 }, 00:11:08.448 "method": "bdev_malloc_create" 00:11:08.448 }, 00:11:08.448 { 00:11:08.448 "params": { 00:11:08.448 "block_size": 512, 00:11:08.448 "num_blocks": 512, 00:11:08.448 "name": "malloc1" 00:11:08.448 }, 00:11:08.448 "method": "bdev_malloc_create" 00:11:08.448 }, 00:11:08.448 { 00:11:08.448 "method": "bdev_wait_for_examine" 00:11:08.448 } 00:11:08.448 ] 00:11:08.448 } 00:11:08.448 ] 00:11:08.448 } 00:11:08.448 [2024-12-10 11:14:15.097743] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
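The JSON document printed above (the malloc0/malloc1 bdev config handed to spdk_dd via `--json /dev/fd/62`) is generated by the harness's gen_conf from the `method_bdev_malloc_create_*` associative arrays declared in the same trace. The sketch below is a hypothetical, much-simplified version of that array-to-JSON step, not the real dd/common.sh helper, and it emits only a single malloc bdev plus bdev_wait_for_examine where the real config above declares both malloc0 and malloc1.

```bash
#!/usr/bin/env bash
# One bdev definition in the same shape as the trace's associative arrays.
declare -A method_bdev_malloc_create_0=([name]=malloc0 [num_blocks]=512 [block_size]=512)

# emit_malloc_conf is a hypothetical stand-in for gen_conf: it turns one such
# array into the JSON subsystem config that spdk_dd accepts via --json.
emit_malloc_conf() {
    local -n p=$1   # nameref to the associative array (requires bash 4.3+)
    cat <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "block_size": ${p[block_size]},
            "num_blocks": ${p[num_blocks]},
            "name": "${p[name]}"
          },
          "method": "bdev_malloc_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

emit_malloc_conf method_bdev_malloc_create_0
```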
00:11:08.448 [2024-12-10 11:14:15.097899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64630 ] 00:11:08.448 [2024-12-10 11:14:15.271730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.705 [2024-12-10 11:14:15.394518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.964 [2024-12-10 11:14:15.578252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.964 [2024-12-10 11:14:15.714508] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:11:08.964 [2024-12-10 11:14:15.714639] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:09.899 [2024-12-10 11:14:16.472792] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:10.158 00:11:10.158 real 0m1.749s 00:11:10.158 user 0m1.476s 00:11:10.158 sys 0m0.215s 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:10.158 ************************************ 00:11:10.158 END TEST dd_invalid_seek 00:11:10.158 ************************************ 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:10.158 ************************************ 00:11:10.158 START TEST dd_invalid_skip 00:11:10.158 ************************************ 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:10.158 11:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:10.158 { 00:11:10.158 "subsystems": [ 00:11:10.158 { 00:11:10.158 "subsystem": "bdev", 00:11:10.158 "config": [ 00:11:10.158 { 00:11:10.158 "params": { 00:11:10.158 "block_size": 512, 00:11:10.158 "num_blocks": 512, 00:11:10.158 "name": "malloc0" 00:11:10.158 }, 00:11:10.158 "method": "bdev_malloc_create" 00:11:10.158 }, 00:11:10.158 { 00:11:10.158 "params": { 00:11:10.158 "block_size": 512, 00:11:10.158 "num_blocks": 512, 00:11:10.158 "name": "malloc1" 00:11:10.158 }, 00:11:10.158 "method": "bdev_malloc_create" 00:11:10.158 }, 00:11:10.158 { 00:11:10.158 "method": "bdev_wait_for_examine" 00:11:10.158 } 00:11:10.158 ] 00:11:10.158 } 00:11:10.158 ] 00:11:10.158 } 00:11:10.158 [2024-12-10 11:14:16.878251] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:11:10.158 [2024-12-10 11:14:16.878418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64681 ] 00:11:10.424 [2024-12-10 11:14:17.052787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.424 [2024-12-10 11:14:17.156819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.717 [2024-12-10 11:14:17.339870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.717 [2024-12-10 11:14:17.478040] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:11:10.717 [2024-12-10 11:14:17.478129] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:11.652 [2024-12-10 11:14:18.236602] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:11.911 00:11:11.911 real 0m1.708s 00:11:11.911 user 0m1.464s 00:11:11.911 sys 0m0.207s 00:11:11.911 ************************************ 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:11.911 END TEST dd_invalid_skip 00:11:11.911 ************************************ 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:11.911 ************************************ 00:11:11.911 START TEST dd_invalid_input_count 00:11:11.911 ************************************ 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:11.911 11:14:18 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:11.911 { 00:11:11.911 "subsystems": [ 00:11:11.911 { 00:11:11.911 "subsystem": "bdev", 00:11:11.911 "config": [ 00:11:11.911 { 00:11:11.911 "params": { 00:11:11.911 "block_size": 512, 00:11:11.911 "num_blocks": 512, 00:11:11.911 "name": "malloc0" 00:11:11.911 }, 00:11:11.912 "method": "bdev_malloc_create" 00:11:11.912 }, 00:11:11.912 { 00:11:11.912 "params": { 00:11:11.912 "block_size": 512, 00:11:11.912 "num_blocks": 512, 00:11:11.912 "name": "malloc1" 00:11:11.912 }, 00:11:11.912 "method": "bdev_malloc_create" 00:11:11.912 }, 00:11:11.912 { 00:11:11.912 "method": "bdev_wait_for_examine" 00:11:11.912 } 00:11:11.912 ] 00:11:11.912 } 00:11:11.912 ] 00:11:11.912 } 00:11:11.912 [2024-12-10 11:14:18.668173] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
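dd_invalid_seek, dd_invalid_skip and dd_invalid_input_count all reuse the same pair of 512-block, 512-byte-block malloc bdevs and pass 513 where at most 512 blocks exist, so spdk_dd rejects the value before copying anything ("value too big (513) - only 512 blocks available ..."). The sketch below just lines up those three invocations; the config file name is hypothetical, standing in for the fd-fed JSON shown above, and each command is expected to exit non-zero.

```bash
#!/usr/bin/env bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=malloc_pair.json   # hypothetical file holding the malloc0/malloc1 config shown above

# Per the error messages in this log: --seek is checked against the blocks
# available in the output bdev, --skip against the input bdev, and --count
# against the blocks available from the input.
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=512 --seek=513  --json "$CONF"
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=512 --skip=513  --json "$CONF"
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=512 --count=513 --json "$CONF"
```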
00:11:11.912 [2024-12-10 11:14:18.668342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64725 ] 00:11:12.170 [2024-12-10 11:14:18.841164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.170 [2024-12-10 11:14:18.962199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.429 [2024-12-10 11:14:19.145469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.687 [2024-12-10 11:14:19.281456] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:11:12.687 [2024-12-10 11:14:19.281569] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:13.254 [2024-12-10 11:14:20.034658] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:13.513 00:11:13.513 real 0m1.761s 00:11:13.513 user 0m1.487s 00:11:13.513 sys 0m0.215s 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:13.513 ************************************ 00:11:13.513 END TEST dd_invalid_input_count 00:11:13.513 ************************************ 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.513 11:14:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 ************************************ 00:11:13.771 START TEST dd_invalid_output_count 00:11:13.771 ************************************ 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:13.771 11:14:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:13.771 { 00:11:13.771 "subsystems": [ 00:11:13.771 { 00:11:13.771 "subsystem": "bdev", 00:11:13.771 "config": [ 00:11:13.771 { 00:11:13.771 "params": { 00:11:13.772 "block_size": 512, 00:11:13.772 "num_blocks": 512, 00:11:13.772 "name": "malloc0" 00:11:13.772 }, 00:11:13.772 "method": "bdev_malloc_create" 00:11:13.772 }, 00:11:13.772 { 00:11:13.772 "method": "bdev_wait_for_examine" 00:11:13.772 } 00:11:13.772 ] 00:11:13.772 } 00:11:13.772 ] 00:11:13.772 } 00:11:13.772 [2024-12-10 11:14:20.460662] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
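dd_invalid_output_count flips the direction: the input is the plain file dd.dump0 and the single malloc bdev is now the output, so the same 513-block request trips the "only 512 blocks available in output" check instead. Both negative cases lean on the harness's NOT wrapper, which treats a non-zero exit as the desired result. A simplified stand-in for that pattern, assuming this reduced form in place of the real helper in common/autotest_common.sh:

  # Reduced stand-in for the harness's NOT helper: the wrapped command is
  # expected to fail, so an unexpected success becomes the error case.
  NOT() {
    if "$@"; then
      return 1   # command unexpectedly succeeded
    fi
    return 0     # non-zero exit is what the negative test wants
  }
  # conf here is the single-malloc0 config dumped just above; SPDK_DD as in the earlier sketch.
  NOT "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --ob=malloc0 --count=513 --bs=512 --json <(echo "$conf")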
00:11:13.772 [2024-12-10 11:14:20.460819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64772 ] 00:11:14.030 [2024-12-10 11:14:20.633192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.030 [2024-12-10 11:14:20.740815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.288 [2024-12-10 11:14:20.923560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:14.288 [2024-12-10 11:14:21.050052] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:11:14.288 [2024-12-10 11:14:21.050145] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:15.224 [2024-12-10 11:14:21.818395] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:15.483 00:11:15.483 real 0m1.758s 00:11:15.483 user 0m1.500s 00:11:15.483 sys 0m0.208s 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:15.483 ************************************ 00:11:15.483 END TEST dd_invalid_output_count 00:11:15.483 ************************************ 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:15.483 ************************************ 00:11:15.483 START TEST dd_bs_not_multiple 00:11:15.483 ************************************ 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:15.483 11:14:22 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:15.483 11:14:22 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:15.483 { 00:11:15.483 "subsystems": [ 00:11:15.483 { 00:11:15.483 "subsystem": "bdev", 00:11:15.483 "config": [ 00:11:15.483 { 00:11:15.483 "params": { 00:11:15.483 "block_size": 512, 00:11:15.483 "num_blocks": 512, 00:11:15.483 "name": "malloc0" 00:11:15.483 }, 00:11:15.483 "method": "bdev_malloc_create" 00:11:15.483 }, 00:11:15.483 { 00:11:15.483 "params": { 00:11:15.483 "block_size": 512, 00:11:15.483 "num_blocks": 512, 00:11:15.483 "name": "malloc1" 00:11:15.483 }, 00:11:15.483 "method": "bdev_malloc_create" 00:11:15.483 }, 00:11:15.483 { 00:11:15.483 "method": "bdev_wait_for_examine" 00:11:15.483 } 00:11:15.483 ] 00:11:15.483 } 00:11:15.483 ] 00:11:15.483 } 00:11:15.483 [2024-12-10 11:14:22.258707] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
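dd_bs_not_multiple keeps both 512-byte-block malloc bdevs but passes --bs=513, which spdk_dd rejects because the transfer size must be a multiple of the input bdev's native block size. Reusing SPDK_DD, conf, and the NOT stand-in from the sketches above, a hedged contrast of a rejected and an accepted value:

  # 513 is not a multiple of the 512-byte native block size, so this is the
  # negative case traced above ("--bs value must be a multiple of ...").
  NOT "$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=513 --json <(echo "$conf")
  # Any multiple of 512 should clear this particular check; 512 blocks of 512
  # bytes copies the whole malloc0 bdev.
  "$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=512 --count=512 --json <(echo "$conf")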
00:11:15.483 [2024-12-10 11:14:22.258877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64821 ] 00:11:15.742 [2024-12-10 11:14:22.432133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.742 [2024-12-10 11:14:22.540794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.001 [2024-12-10 11:14:22.741520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:16.259 [2024-12-10 11:14:22.882396] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:11:16.259 [2024-12-10 11:14:22.882476] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:17.195 [2024-12-10 11:14:23.661016] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:17.195 00:11:17.195 real 0m1.783s 00:11:17.195 user 0m1.520s 00:11:17.195 sys 0m0.210s 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:17.195 ************************************ 00:11:17.195 END TEST dd_bs_not_multiple 00:11:17.195 ************************************ 00:11:17.195 00:11:17.195 real 0m16.107s 00:11:17.195 user 0m12.101s 00:11:17.195 sys 0m3.325s 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.195 11:14:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:17.195 ************************************ 00:11:17.195 END TEST spdk_dd_negative 00:11:17.195 ************************************ 00:11:17.195 00:11:17.195 real 3m23.456s 00:11:17.195 user 2m46.864s 00:11:17.195 sys 1m11.966s 00:11:17.195 11:14:24 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.195 11:14:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:17.195 ************************************ 00:11:17.195 END TEST spdk_dd 00:11:17.195 ************************************ 00:11:17.454 11:14:24 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:17.454 11:14:24 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:11:17.454 11:14:24 -- spdk/autotest.sh@260 -- # timing_exit lib 00:11:17.454 11:14:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.454 11:14:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.454 11:14:24 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:11:17.454 11:14:24 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:11:17.454 11:14:24 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:11:17.454 11:14:24 -- spdk/autotest.sh@277 
-- # export NET_TYPE 00:11:17.454 11:14:24 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:11:17.454 11:14:24 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:11:17.454 11:14:24 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:17.454 11:14:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.454 11:14:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.454 11:14:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.454 ************************************ 00:11:17.454 START TEST nvmf_tcp 00:11:17.454 ************************************ 00:11:17.454 11:14:24 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:17.454 * Looking for test storage... 00:11:17.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:17.454 11:14:24 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:17.454 11:14:24 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:17.454 11:14:24 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:17.454 11:14:24 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:17.454 11:14:24 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.454 11:14:24 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.713 11:14:24 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:17.713 11:14:24 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.713 11:14:24 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:17.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.713 --rc genhtml_branch_coverage=1 00:11:17.713 --rc genhtml_function_coverage=1 00:11:17.713 --rc genhtml_legend=1 00:11:17.713 --rc geninfo_all_blocks=1 00:11:17.713 --rc geninfo_unexecuted_blocks=1 00:11:17.713 00:11:17.713 ' 00:11:17.713 11:14:24 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:17.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.713 --rc genhtml_branch_coverage=1 00:11:17.713 --rc genhtml_function_coverage=1 00:11:17.713 --rc genhtml_legend=1 00:11:17.713 --rc geninfo_all_blocks=1 00:11:17.713 --rc geninfo_unexecuted_blocks=1 00:11:17.713 00:11:17.713 ' 00:11:17.713 11:14:24 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:17.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.713 --rc genhtml_branch_coverage=1 00:11:17.713 --rc genhtml_function_coverage=1 00:11:17.713 --rc genhtml_legend=1 00:11:17.713 --rc geninfo_all_blocks=1 00:11:17.713 --rc geninfo_unexecuted_blocks=1 00:11:17.713 00:11:17.713 ' 00:11:17.713 11:14:24 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:17.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.713 --rc genhtml_branch_coverage=1 00:11:17.713 --rc genhtml_function_coverage=1 00:11:17.713 --rc genhtml_legend=1 00:11:17.713 --rc geninfo_all_blocks=1 00:11:17.713 --rc geninfo_unexecuted_blocks=1 00:11:17.713 00:11:17.713 ' 00:11:17.713 11:14:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:17.713 11:14:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:17.713 11:14:24 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:17.713 11:14:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.713 11:14:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.713 11:14:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:17.713 ************************************ 00:11:17.713 START TEST nvmf_target_core 00:11:17.713 ************************************ 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:17.713 * Looking for test storage... 00:11:17.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.713 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.714 --rc genhtml_branch_coverage=1 00:11:17.714 --rc genhtml_function_coverage=1 00:11:17.714 --rc genhtml_legend=1 00:11:17.714 --rc geninfo_all_blocks=1 00:11:17.714 --rc geninfo_unexecuted_blocks=1 00:11:17.714 00:11:17.714 ' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.714 --rc genhtml_branch_coverage=1 00:11:17.714 --rc genhtml_function_coverage=1 00:11:17.714 --rc genhtml_legend=1 00:11:17.714 --rc geninfo_all_blocks=1 00:11:17.714 --rc geninfo_unexecuted_blocks=1 00:11:17.714 00:11:17.714 ' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.714 --rc genhtml_branch_coverage=1 00:11:17.714 --rc genhtml_function_coverage=1 00:11:17.714 --rc genhtml_legend=1 00:11:17.714 --rc geninfo_all_blocks=1 00:11:17.714 --rc geninfo_unexecuted_blocks=1 00:11:17.714 00:11:17.714 ' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.714 --rc genhtml_branch_coverage=1 00:11:17.714 --rc genhtml_function_coverage=1 00:11:17.714 --rc genhtml_legend=1 00:11:17.714 --rc geninfo_all_blocks=1 00:11:17.714 --rc geninfo_unexecuted_blocks=1 00:11:17.714 00:11:17.714 ' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.714 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:17.714 ************************************ 00:11:17.714 START TEST nvmf_host_management 00:11:17.714 ************************************ 00:11:17.714 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:17.974 * Looking for test storage... 
00:11:17.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.974 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:17.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.974 --rc genhtml_branch_coverage=1 00:11:17.974 --rc genhtml_function_coverage=1 00:11:17.974 --rc genhtml_legend=1 00:11:17.974 --rc geninfo_all_blocks=1 00:11:17.974 --rc geninfo_unexecuted_blocks=1 00:11:17.975 00:11:17.975 ' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:17.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.975 --rc genhtml_branch_coverage=1 00:11:17.975 --rc genhtml_function_coverage=1 00:11:17.975 --rc genhtml_legend=1 00:11:17.975 --rc geninfo_all_blocks=1 00:11:17.975 --rc geninfo_unexecuted_blocks=1 00:11:17.975 00:11:17.975 ' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:17.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.975 --rc genhtml_branch_coverage=1 00:11:17.975 --rc genhtml_function_coverage=1 00:11:17.975 --rc genhtml_legend=1 00:11:17.975 --rc geninfo_all_blocks=1 00:11:17.975 --rc geninfo_unexecuted_blocks=1 00:11:17.975 00:11:17.975 ' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:17.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.975 --rc genhtml_branch_coverage=1 00:11:17.975 --rc genhtml_function_coverage=1 00:11:17.975 --rc genhtml_legend=1 00:11:17.975 --rc geninfo_all_blocks=1 00:11:17.975 --rc geninfo_unexecuted_blocks=1 00:11:17.975 00:11:17.975 ' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.975 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.975 11:14:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:17.975 Cannot find device "nvmf_init_br" 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:17.975 Cannot find device "nvmf_init_br2" 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:17.975 Cannot find device "nvmf_tgt_br" 00:11:17.975 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:11:17.976 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.976 Cannot find device "nvmf_tgt_br2" 00:11:17.976 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:11:17.976 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:17.976 Cannot find device "nvmf_init_br" 00:11:17.976 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:11:17.976 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:18.234 Cannot find device "nvmf_init_br2" 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:18.234 Cannot find device "nvmf_tgt_br" 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:18.234 Cannot find device "nvmf_tgt_br2" 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:18.234 Cannot find device "nvmf_br" 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:18.234 Cannot find device "nvmf_init_if" 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:18.234 Cannot find device "nvmf_init_if2" 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:18.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:18.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:18.234 11:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:18.234 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:18.234 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:18.234 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:18.234 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:18.234 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.493 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:18.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:18.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:11:18.494 00:11:18.494 --- 10.0.0.3 ping statistics --- 00:11:18.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.494 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:18.494 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:18.494 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:11:18.494 00:11:18.494 --- 10.0.0.4 ping statistics --- 00:11:18.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.494 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:11:18.494 00:11:18.494 --- 10.0.0.1 ping statistics --- 00:11:18.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.494 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:18.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:18.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:11:18.494 00:11:18.494 --- 10.0.0.2 ping statistics --- 00:11:18.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.494 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:18.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65177 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65177 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65177 ']' 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.494 11:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:18.753 [2024-12-10 11:14:25.375824] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
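For reference, the veth/netns topology that nvmf/common.sh builds in the trace above can be reproduced standalone with roughly the commands below. Interface names, addresses and the port-4420 firewall rule are taken from the trace; the second interface pair, the comment-tagged iptables bookkeeping and error handling are omitted, and a clean host with no leftover namespace is assumed.
# Minimal sketch of the test topology (assumptions noted above; not part of the original trace).
ip netns add nvmf_tgt_ns_spdk                                  # target side lives in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                # bridge ties the peer ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listen port
ping -c 1 10.0.0.3                                             # initiator -> target, as checked in the trace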
00:11:18.753 [2024-12-10 11:14:25.376726] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.753 [2024-12-10 11:14:25.568959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.012 [2024-12-10 11:14:25.706534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.012 [2024-12-10 11:14:25.706817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.012 [2024-12-10 11:14:25.706858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.012 [2024-12-10 11:14:25.706875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.012 [2024-12-10 11:14:25.706892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.012 [2024-12-10 11:14:25.709093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.012 [2024-12-10 11:14:25.709244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.012 [2024-12-10 11:14:25.709326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:19.012 [2024-12-10 11:14:25.709494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.270 [2024-12-10 11:14:25.927943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.838 [2024-12-10 11:14:26.401423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
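The target itself is then started inside that namespace. Based on the nvmfappstart and waitforlisten steps in the trace, the equivalent manual sequence is roughly the following sketch; the rpc.py path is the standard SPDK one and is an assumption, since the trace only shows the rpc_cmd wrapper, and the polling loop is a simplification of waitforlisten.
# Sketch of starting nvmf_tgt in the namespace and creating the TCP transport (paths as on this CI VM).
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# waitforlisten: block until the app answers on its default RPC socket.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    sleep 0.5
done
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192   # same transport options as in the trace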
00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.838 Malloc0 00:11:19.838 [2024-12-10 11:14:26.530060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65231 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65231 /var/tmp/bdevperf.sock 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65231 ']' 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:19.838 { 00:11:19.838 "params": { 00:11:19.838 "name": "Nvme$subsystem", 00:11:19.838 "trtype": "$TEST_TRANSPORT", 00:11:19.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:19.838 "adrfam": "ipv4", 00:11:19.838 "trsvcid": "$NVMF_PORT", 00:11:19.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:19.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:19.838 "hdgst": ${hdgst:-false}, 00:11:19.838 "ddgst": ${ddgst:-false} 00:11:19.838 }, 00:11:19.838 "method": "bdev_nvme_attach_controller" 00:11:19.838 } 00:11:19.838 EOF 00:11:19.838 )") 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
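The rpcs.txt batch that host_management.sh cats into rpc_cmd is not echoed in the log, but from the Malloc0 bdev and the 10.0.0.3:4420 listener reported above, plus the cnode0/host0 NQNs used later in the test, it plausibly amounts to something like the sketch below; the malloc size, block size and serial number are illustrative assumptions.
# Plausible reconstruction of the subsystem setup batch (not shown verbatim in the log).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0                        # backing bdev; size/block size assumed
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0   # serial number assumed
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0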
00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:19.838 11:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:19.838 "params": { 00:11:19.838 "name": "Nvme0", 00:11:19.838 "trtype": "tcp", 00:11:19.838 "traddr": "10.0.0.3", 00:11:19.838 "adrfam": "ipv4", 00:11:19.838 "trsvcid": "4420", 00:11:19.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:19.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:19.838 "hdgst": false, 00:11:19.838 "ddgst": false 00:11:19.838 }, 00:11:19.838 "method": "bdev_nvme_attach_controller" 00:11:19.838 }' 00:11:20.097 [2024-12-10 11:14:26.687769] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:11:20.097 [2024-12-10 11:14:26.688116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65231 ] 00:11:20.097 [2024-12-10 11:14:26.862307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.355 [2024-12-10 11:14:26.987295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.613 [2024-12-10 11:14:27.184204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:20.613 Running I/O for 10 seconds... 
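The initiator side is a bdevperf instance in the root namespace. The trace shows it being fed the attach-controller JSON through /dev/fd/63, which in the shell is simply process substitution around gen_nvmf_target_json; the command line below is the one from the trace, and it assumes test/nvmf/common.sh has been sourced so gen_nvmf_target_json is available.
# How the bdevperf run above is launched (flags taken from the trace).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10
# --json /dev/fd/63 in the trace is this process substitution, resolving to the Nvme0 @ 10.0.0.3:4420
# config printed above; -q 64 -o 65536 -w verify -t 10 means queue depth 64, 64 KiB I/Os,
# a verify workload, and a 10 second run.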
00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:21.182 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.183 11:14:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.183 11:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:21.183 [2024-12-10 11:14:27.877842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.877937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.878967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.878995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:21.183 [2024-12-10 11:14:27.879612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.183 [2024-12-10 11:14:27.879772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.183 [2024-12-10 11:14:27.879788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.879801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.879816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.879829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.879852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.879879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.879901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.879915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.879931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.879945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 
[2024-12-10 11:14:27.879982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.879996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 
11:14:27.880277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 
11:14:27.880599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.184 [2024-12-10 11:14:27.880755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.880770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:11:21.184 [2024-12-10 11:14:27.881241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.184 [2024-12-10 11:14:27.881426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.881454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.184 [2024-12-10 11:14:27.881468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.881484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.184 [2024-12-10 11:14:27.881498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.881512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.184 [2024-12-10 11:14:27.881525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.184 [2024-12-10 11:14:27.881538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:11:21.184 [2024-12-10 11:14:27.882809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:21.184 task offset: 81920 on job bdev=Nvme0n1 fails 00:11:21.184 00:11:21.184 Latency(us) 00:11:21.184 [2024-12-10T11:14:28.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.184 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:21.184 Job: Nvme0n1 ended in about 0.50 seconds with error 00:11:21.184 Verification LBA range: start 0x0 length 0x400 00:11:21.184 Nvme0n1 : 0.50 1278.24 79.89 127.82 0.00 44195.52 4259.84 41943.04 00:11:21.184 [2024-12-10T11:14:28.010Z] =================================================================================================================== 00:11:21.184 [2024-12-10T11:14:28.010Z] Total : 1278.24 79.89 127.82 0.00 44195.52 4259.84 41943.04 00:11:21.184 [2024-12-10 11:14:27.888202] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:21.184 [2024-12-10 11:14:27.888279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:11:21.185 [2024-12-10 11:14:27.894212] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65231 00:11:22.120 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65231) - No such process 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:22.120 { 00:11:22.120 "params": { 00:11:22.120 "name": "Nvme$subsystem", 00:11:22.120 "trtype": "$TEST_TRANSPORT", 00:11:22.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.120 "adrfam": "ipv4", 00:11:22.120 "trsvcid": "$NVMF_PORT", 00:11:22.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.120 "hdgst": ${hdgst:-false}, 00:11:22.120 "ddgst": ${ddgst:-false} 00:11:22.120 }, 00:11:22.120 "method": "bdev_nvme_attach_controller" 00:11:22.120 } 00:11:22.120 EOF 
00:11:22.120 )") 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:22.120 11:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:22.120 "params": { 00:11:22.120 "name": "Nvme0", 00:11:22.120 "trtype": "tcp", 00:11:22.120 "traddr": "10.0.0.3", 00:11:22.120 "adrfam": "ipv4", 00:11:22.120 "trsvcid": "4420", 00:11:22.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:22.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:22.120 "hdgst": false, 00:11:22.120 "ddgst": false 00:11:22.121 }, 00:11:22.121 "method": "bdev_nvme_attach_controller" 00:11:22.121 }' 00:11:22.379 [2024-12-10 11:14:29.035990] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:11:22.379 [2024-12-10 11:14:29.036703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65270 ] 00:11:22.638 [2024-12-10 11:14:29.221604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.638 [2024-12-10 11:14:29.349165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.897 [2024-12-10 11:14:29.551636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:23.156 Running I/O for 1 seconds... 00:11:24.090 1344.00 IOPS, 84.00 MiB/s 00:11:24.090 Latency(us) 00:11:24.090 [2024-12-10T11:14:30.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.090 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:24.090 Verification LBA range: start 0x0 length 0x400 00:11:24.090 Nvme0n1 : 1.04 1355.32 84.71 0.00 0.00 46341.83 5630.14 41228.10 00:11:24.090 [2024-12-10T11:14:30.916Z] =================================================================================================================== 00:11:24.090 [2024-12-10T11:14:30.916Z] Total : 1355.32 84.71 0.00 0.00 46341.83 5630.14 41228.10 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.024 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.282 rmmod nvme_tcp 00:11:25.282 rmmod nvme_fabrics 00:11:25.282 rmmod nvme_keyring 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65177 ']' 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65177 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65177 ']' 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65177 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65177 00:11:25.282 killing process with pid 65177 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65177' 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65177 00:11:25.282 11:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65177 00:11:26.219 [2024-12-10 11:14:33.038561] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 
nomaster 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:26.477 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:26.736 00:11:26.736 real 0m8.836s 00:11:26.736 user 0m33.450s 00:11:26.736 sys 0m1.797s 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.736 ************************************ 00:11:26.736 END TEST nvmf_host_management 00:11:26.736 ************************************ 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:26.736 ************************************ 00:11:26.736 START TEST nvmf_lvol 00:11:26.736 ************************************ 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:26.736 * Looking for test storage... 
00:11:26.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:11:26.736 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.996 --rc genhtml_branch_coverage=1 00:11:26.996 --rc genhtml_function_coverage=1 00:11:26.996 --rc genhtml_legend=1 00:11:26.996 --rc geninfo_all_blocks=1 00:11:26.996 --rc geninfo_unexecuted_blocks=1 00:11:26.996 00:11:26.996 ' 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.996 --rc genhtml_branch_coverage=1 00:11:26.996 --rc genhtml_function_coverage=1 00:11:26.996 --rc genhtml_legend=1 00:11:26.996 --rc geninfo_all_blocks=1 00:11:26.996 --rc geninfo_unexecuted_blocks=1 00:11:26.996 00:11:26.996 ' 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.996 --rc genhtml_branch_coverage=1 00:11:26.996 --rc genhtml_function_coverage=1 00:11:26.996 --rc genhtml_legend=1 00:11:26.996 --rc geninfo_all_blocks=1 00:11:26.996 --rc geninfo_unexecuted_blocks=1 00:11:26.996 00:11:26.996 ' 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.996 --rc genhtml_branch_coverage=1 00:11:26.996 --rc genhtml_function_coverage=1 00:11:26.996 --rc genhtml_legend=1 00:11:26.996 --rc geninfo_all_blocks=1 00:11:26.996 --rc geninfo_unexecuted_blocks=1 00:11:26.996 00:11:26.996 ' 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.996 11:14:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.996 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.997 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:26.997 
11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
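For reference, the topology that nvmf_veth_init assembles from the variables just assigned reduces to two veth pairs joined by a Linux bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. A minimal sketch of that bring-up follows (interface names and 10.0.0.x/24 addresses are exactly the ones set above; the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is wired up the same way, and the full command trace appears in the log below):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # NVMF_FIRST_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                          # bridge joins the host-side halves
ip link set nvmf_tgt_br master nvmf_br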
00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:26.997 Cannot find device "nvmf_init_br" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:26.997 Cannot find device "nvmf_init_br2" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:26.997 Cannot find device "nvmf_tgt_br" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.997 Cannot find device "nvmf_tgt_br2" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:26.997 Cannot find device "nvmf_init_br" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:26.997 Cannot find device "nvmf_init_br2" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:26.997 Cannot find device "nvmf_tgt_br" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:26.997 Cannot find device "nvmf_tgt_br2" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:26.997 Cannot find device "nvmf_br" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:26.997 Cannot find device "nvmf_init_if" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:26.997 Cannot find device "nvmf_init_if2" 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:26.997 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:27.256 11:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:27.256 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:27.256 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:27.256 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:27.256 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:27.256 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:27.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:27.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:11:27.256 00:11:27.256 --- 10.0.0.3 ping statistics --- 00:11:27.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.256 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:27.256 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:27.256 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:27.256 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:11:27.256 00:11:27.256 --- 10.0.0.4 ping statistics --- 00:11:27.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.256 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:27.256 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:27.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:11:27.256 00:11:27.256 --- 10.0.0.1 ping statistics --- 00:11:27.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.257 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:27.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:27.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:11:27.257 00:11:27.257 --- 10.0.0.2 ping statistics --- 00:11:27.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.257 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65568 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65568 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65568 ']' 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.257 11:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:27.515 [2024-12-10 11:14:34.167720] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:11:27.515 [2024-12-10 11:14:34.168374] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.773 [2024-12-10 11:14:34.351373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.773 [2024-12-10 11:14:34.459045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.773 [2024-12-10 11:14:34.459113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.773 [2024-12-10 11:14:34.459133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.773 [2024-12-10 11:14:34.459145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.773 [2024-12-10 11:14:34.459160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.773 [2024-12-10 11:14:34.461061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.773 [2024-12-10 11:14:34.461148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.773 [2024-12-10 11:14:34.461182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.031 [2024-12-10 11:14:34.648968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.596 11:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.596 11:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:11:28.596 11:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.596 11:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.596 11:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:28.596 11:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.596 11:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:28.853 [2024-12-10 11:14:35.614305] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.853 11:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:29.419 11:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:29.419 11:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:29.710 11:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:29.710 11:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:29.995 11:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:30.559 11:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=92a0bb76-abc1-4183-92f8-da77bc973534 00:11:30.559 11:14:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 92a0bb76-abc1-4183-92f8-da77bc973534 lvol 20 00:11:30.817 11:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1431112b-11f6-4882-9f8c-027355bd0923 00:11:30.817 11:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:31.075 11:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1431112b-11f6-4882-9f8c-027355bd0923 00:11:31.333 11:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:31.591 [2024-12-10 11:14:38.331394] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:31.591 11:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:31.849 11:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65649 00:11:31.849 11:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:31.849 11:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:33.222 11:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 1431112b-11f6-4882-9f8c-027355bd0923 MY_SNAPSHOT 00:11:33.222 11:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f1163a69-e34f-4fba-80e9-6b3aba058182 00:11:33.222 11:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 1431112b-11f6-4882-9f8c-027355bd0923 30 00:11:33.788 11:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f1163a69-e34f-4fba-80e9-6b3aba058182 MY_CLONE 00:11:34.354 11:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e6f29796-9363-4d42-aa7d-01260295f77d 00:11:34.354 11:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e6f29796-9363-4d42-aa7d-01260295f77d 00:11:34.921 11:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65649 00:11:43.030 Initializing NVMe Controllers 00:11:43.030 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:11:43.030 Controller IO queue size 128, less than required. 00:11:43.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:43.030 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:43.030 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:43.030 Initialization complete. Launching workers. 
00:11:43.030 ======================================================== 00:11:43.030 Latency(us) 00:11:43.030 Device Information : IOPS MiB/s Average min max 00:11:43.030 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8497.67 33.19 15075.72 619.66 175939.29 00:11:43.030 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8301.49 32.43 15423.01 4296.37 151228.92 00:11:43.030 ======================================================== 00:11:43.030 Total : 16799.16 65.62 15247.34 619.66 175939.29 00:11:43.030 00:11:43.030 11:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:43.030 11:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1431112b-11f6-4882-9f8c-027355bd0923 00:11:43.030 11:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 92a0bb76-abc1-4183-92f8-da77bc973534 00:11:43.288 11:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:43.288 11:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:43.288 11:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:43.288 11:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.288 11:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.288 rmmod nvme_tcp 00:11:43.288 rmmod nvme_fabrics 00:11:43.288 rmmod nvme_keyring 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65568 ']' 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65568 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65568 ']' 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65568 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.288 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65568 00:11:43.546 killing process with pid 65568 00:11:43.546 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.546 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.546 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65568' 00:11:43.546 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65568 00:11:43.546 11:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65568 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:11:44.963 00:11:44.963 real 0m18.295s 00:11:44.963 user 1m13.174s 00:11:44.963 sys 0m4.219s 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:44.963 ************************************ 00:11:44.963 END TEST nvmf_lvol 00:11:44.963 ************************************ 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:44.963 ************************************ 00:11:44.963 START TEST nvmf_lvs_grow 00:11:44.963 ************************************ 00:11:44.963 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:45.223 * Looking for test storage... 00:11:45.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:45.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.223 --rc genhtml_branch_coverage=1 00:11:45.223 --rc genhtml_function_coverage=1 00:11:45.223 --rc genhtml_legend=1 00:11:45.223 --rc geninfo_all_blocks=1 00:11:45.223 --rc geninfo_unexecuted_blocks=1 00:11:45.223 00:11:45.223 ' 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:45.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.223 --rc genhtml_branch_coverage=1 00:11:45.223 --rc genhtml_function_coverage=1 00:11:45.223 --rc genhtml_legend=1 00:11:45.223 --rc geninfo_all_blocks=1 00:11:45.223 --rc geninfo_unexecuted_blocks=1 00:11:45.223 00:11:45.223 ' 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:45.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.223 --rc genhtml_branch_coverage=1 00:11:45.223 --rc genhtml_function_coverage=1 00:11:45.223 --rc genhtml_legend=1 00:11:45.223 --rc geninfo_all_blocks=1 00:11:45.223 --rc geninfo_unexecuted_blocks=1 00:11:45.223 00:11:45.223 ' 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:45.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.223 --rc genhtml_branch_coverage=1 00:11:45.223 --rc genhtml_function_coverage=1 00:11:45.223 --rc genhtml_legend=1 00:11:45.223 --rc geninfo_all_blocks=1 00:11:45.223 --rc geninfo_unexecuted_blocks=1 00:11:45.223 00:11:45.223 ' 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:45.223 11:14:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.223 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.224 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
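The separate bdevperf_rpc_sock set here suggests the lvs_grow test will later drive a bdevperf instance over its own RPC socket rather than the target's default /var/tmp/spdk.sock. With SPDK's rpc.py that only means passing -s; a hypothetical illustration (bdev_get_bdevs stands in for whatever call the test actually issues):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs   # talk to the app listening on the bdevperf socket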
00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:45.224 Cannot find device "nvmf_init_br" 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:45.224 Cannot find device "nvmf_init_br2" 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:11:45.224 11:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:45.224 Cannot find device "nvmf_tgt_br" 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.224 Cannot find device "nvmf_tgt_br2" 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:45.224 Cannot find device "nvmf_init_br" 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:45.224 Cannot find device "nvmf_init_br2" 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:45.224 Cannot find device "nvmf_tgt_br" 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:11:45.224 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:45.483 Cannot find device "nvmf_tgt_br2" 00:11:45.483 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:11:45.483 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:45.483 Cannot find device "nvmf_br" 00:11:45.483 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:11:45.483 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:45.483 Cannot find device "nvmf_init_if" 00:11:45.483 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:11:45.483 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:45.483 Cannot find device "nvmf_init_if2" 00:11:45.483 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:45.484 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
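At this point nvmf_veth_init has built the virtual test network: veth pairs for two initiator interfaces and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1/.2 assigned on the initiator side and 10.0.0.3/.4 inside the namespace, and all host-side peers enslaved to the nvmf_br bridge. A minimal sketch of one initiator/target pair of that topology, assuming the same interface names and addressing used above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The iptables ACCEPT rules for TCP port 4420 and the ping checks that follow then confirm both sides can reach each other across the bridge before the NVMe-oF target is started inside the namespace.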
00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:45.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:45.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:11:45.742 00:11:45.742 --- 10.0.0.3 ping statistics --- 00:11:45.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.742 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:45.742 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:45.742 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:11:45.742 00:11:45.742 --- 10.0.0.4 ping statistics --- 00:11:45.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.742 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:45.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:45.742 00:11:45.742 --- 10.0.0.1 ping statistics --- 00:11:45.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.742 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:45.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:45.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:11:45.742 00:11:45.742 --- 10.0.0.2 ping statistics --- 00:11:45.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.742 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.742 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66050 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66050 00:11:45.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66050 ']' 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.743 11:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:45.743 [2024-12-10 11:14:52.474284] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:11:45.743 [2024-12-10 11:14:52.474495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.001 [2024-12-10 11:14:52.648856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.001 [2024-12-10 11:14:52.757489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.001 [2024-12-10 11:14:52.757549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.001 [2024-12-10 11:14:52.757569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.001 [2024-12-10 11:14:52.757594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.001 [2024-12-10 11:14:52.757608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.001 [2024-12-10 11:14:52.758834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.259 [2024-12-10 11:14:52.941949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:46.826 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.826 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:11:46.826 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.826 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.826 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:46.826 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.826 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:47.084 [2024-12-10 11:14:53.862814] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:47.084 ************************************ 00:11:47.084 START TEST lvs_grow_clean 00:11:47.084 ************************************ 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:47.084 11:14:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:47.084 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:47.085 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:47.085 11:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:47.651 11:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:47.651 11:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:47.909 11:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3141f566-5a68-4d39-b53f-39c5c066e210 00:11:47.909 11:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:11:47.909 11:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:48.167 11:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:48.167 11:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:48.167 11:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3141f566-5a68-4d39-b53f-39c5c066e210 lvol 150 00:11:48.427 11:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a76ae97d-a8af-46fe-9daa-b562aaad0ea1 00:11:48.427 11:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:48.427 11:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:48.686 [2024-12-10 11:14:55.378021] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:48.686 [2024-12-10 11:14:55.378137] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:48.686 true 00:11:48.686 11:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:48.686 11:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:11:48.944 11:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:48.944 11:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:49.202 11:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a76ae97d-a8af-46fe-9daa-b562aaad0ea1 00:11:49.460 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:49.723 [2024-12-10 11:14:56.515012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:49.723 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:50.289 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66138 00:11:50.290 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:50.290 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:50.290 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66138 /var/tmp/bdevperf.sock 00:11:50.290 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66138 ']' 00:11:50.290 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:50.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:50.290 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.290 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:50.290 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.290 11:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:50.290 [2024-12-10 11:14:56.936870] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:11:50.290 [2024-12-10 11:14:56.937033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66138 ] 00:11:50.548 [2024-12-10 11:14:57.122231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.548 [2024-12-10 11:14:57.246561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.806 [2024-12-10 11:14:57.464822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:51.065 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.065 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:11:51.065 11:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:51.631 Nvme0n1 00:11:51.631 11:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:51.889 [ 00:11:51.889 { 00:11:51.889 "name": "Nvme0n1", 00:11:51.889 "aliases": [ 00:11:51.889 "a76ae97d-a8af-46fe-9daa-b562aaad0ea1" 00:11:51.889 ], 00:11:51.889 "product_name": "NVMe disk", 00:11:51.889 "block_size": 4096, 00:11:51.889 "num_blocks": 38912, 00:11:51.889 "uuid": "a76ae97d-a8af-46fe-9daa-b562aaad0ea1", 00:11:51.889 "numa_id": -1, 00:11:51.889 "assigned_rate_limits": { 00:11:51.889 "rw_ios_per_sec": 0, 00:11:51.889 "rw_mbytes_per_sec": 0, 00:11:51.889 "r_mbytes_per_sec": 0, 00:11:51.889 "w_mbytes_per_sec": 0 00:11:51.889 }, 00:11:51.889 "claimed": false, 00:11:51.889 "zoned": false, 00:11:51.889 "supported_io_types": { 00:11:51.889 "read": true, 00:11:51.889 "write": true, 00:11:51.889 "unmap": true, 00:11:51.889 "flush": true, 00:11:51.889 "reset": true, 00:11:51.889 "nvme_admin": true, 00:11:51.889 "nvme_io": true, 00:11:51.889 "nvme_io_md": false, 00:11:51.889 "write_zeroes": true, 00:11:51.889 "zcopy": false, 00:11:51.889 "get_zone_info": false, 00:11:51.889 "zone_management": false, 00:11:51.889 "zone_append": false, 00:11:51.889 "compare": true, 00:11:51.889 "compare_and_write": true, 00:11:51.889 "abort": true, 00:11:51.889 "seek_hole": false, 00:11:51.889 "seek_data": false, 00:11:51.889 "copy": true, 00:11:51.889 "nvme_iov_md": false 00:11:51.889 }, 00:11:51.889 "memory_domains": [ 00:11:51.889 { 00:11:51.889 "dma_device_id": "system", 00:11:51.889 "dma_device_type": 1 00:11:51.889 } 00:11:51.889 ], 00:11:51.889 "driver_specific": { 00:11:51.889 "nvme": [ 00:11:51.889 { 00:11:51.889 "trid": { 00:11:51.889 "trtype": "TCP", 00:11:51.889 "adrfam": "IPv4", 00:11:51.889 "traddr": "10.0.0.3", 00:11:51.889 "trsvcid": "4420", 00:11:51.889 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:51.889 }, 00:11:51.889 "ctrlr_data": { 00:11:51.889 "cntlid": 1, 00:11:51.889 "vendor_id": "0x8086", 00:11:51.889 "model_number": "SPDK bdev Controller", 00:11:51.889 "serial_number": "SPDK0", 00:11:51.889 "firmware_revision": "25.01", 00:11:51.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:51.889 "oacs": { 00:11:51.889 "security": 0, 00:11:51.889 "format": 0, 00:11:51.890 "firmware": 0, 
00:11:51.890 "ns_manage": 0 00:11:51.890 }, 00:11:51.890 "multi_ctrlr": true, 00:11:51.890 "ana_reporting": false 00:11:51.890 }, 00:11:51.890 "vs": { 00:11:51.890 "nvme_version": "1.3" 00:11:51.890 }, 00:11:51.890 "ns_data": { 00:11:51.890 "id": 1, 00:11:51.890 "can_share": true 00:11:51.890 } 00:11:51.890 } 00:11:51.890 ], 00:11:51.890 "mp_policy": "active_passive" 00:11:51.890 } 00:11:51.890 } 00:11:51.890 ] 00:11:51.890 11:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66167 00:11:51.890 11:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:51.890 11:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:51.890 Running I/O for 10 seconds... 00:11:52.824 Latency(us) 00:11:52.824 [2024-12-10T11:14:59.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.824 Nvme0n1 : 1.00 5717.00 22.33 0.00 0.00 0.00 0.00 0.00 00:11:52.824 [2024-12-10T11:14:59.650Z] =================================================================================================================== 00:11:52.824 [2024-12-10T11:14:59.650Z] Total : 5717.00 22.33 0.00 0.00 0.00 0.00 0.00 00:11:52.824 00:11:53.759 11:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:11:54.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.017 Nvme0n1 : 2.00 5779.50 22.58 0.00 0.00 0.00 0.00 0.00 00:11:54.017 [2024-12-10T11:15:00.843Z] =================================================================================================================== 00:11:54.017 [2024-12-10T11:15:00.843Z] Total : 5779.50 22.58 0.00 0.00 0.00 0.00 0.00 00:11:54.017 00:11:54.017 true 00:11:54.017 11:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:11:54.017 11:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:54.584 11:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:54.584 11:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:54.584 11:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66167 00:11:54.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.842 Nvme0n1 : 3.00 5712.67 22.32 0.00 0.00 0.00 0.00 0.00 00:11:54.842 [2024-12-10T11:15:01.668Z] =================================================================================================================== 00:11:54.842 [2024-12-10T11:15:01.668Z] Total : 5712.67 22.32 0.00 0.00 0.00 0.00 0.00 00:11:54.842 00:11:56.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.217 Nvme0n1 : 4.00 5713.25 22.32 0.00 0.00 0.00 0.00 0.00 00:11:56.217 [2024-12-10T11:15:03.043Z] 
=================================================================================================================== 00:11:56.217 [2024-12-10T11:15:03.043Z] Total : 5713.25 22.32 0.00 0.00 0.00 0.00 0.00 00:11:56.217 00:11:57.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.156 Nvme0n1 : 5.00 5688.20 22.22 0.00 0.00 0.00 0.00 0.00 00:11:57.156 [2024-12-10T11:15:03.982Z] =================================================================================================================== 00:11:57.156 [2024-12-10T11:15:03.982Z] Total : 5688.20 22.22 0.00 0.00 0.00 0.00 0.00 00:11:57.156 00:11:58.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.092 Nvme0n1 : 6.00 5650.33 22.07 0.00 0.00 0.00 0.00 0.00 00:11:58.092 [2024-12-10T11:15:04.918Z] =================================================================================================================== 00:11:58.092 [2024-12-10T11:15:04.918Z] Total : 5650.33 22.07 0.00 0.00 0.00 0.00 0.00 00:11:58.092 00:11:59.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.027 Nvme0n1 : 7.00 5623.29 21.97 0.00 0.00 0.00 0.00 0.00 00:11:59.027 [2024-12-10T11:15:05.853Z] =================================================================================================================== 00:11:59.027 [2024-12-10T11:15:05.853Z] Total : 5623.29 21.97 0.00 0.00 0.00 0.00 0.00 00:11:59.027 00:11:59.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.960 Nvme0n1 : 8.00 5634.75 22.01 0.00 0.00 0.00 0.00 0.00 00:11:59.960 [2024-12-10T11:15:06.786Z] =================================================================================================================== 00:11:59.960 [2024-12-10T11:15:06.786Z] Total : 5634.75 22.01 0.00 0.00 0.00 0.00 0.00 00:11:59.960 00:12:00.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.897 Nvme0n1 : 9.00 5629.56 21.99 0.00 0.00 0.00 0.00 0.00 00:12:00.897 [2024-12-10T11:15:07.723Z] =================================================================================================================== 00:12:00.897 [2024-12-10T11:15:07.723Z] Total : 5629.56 21.99 0.00 0.00 0.00 0.00 0.00 00:12:00.897 00:12:01.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.831 Nvme0n1 : 10.00 5625.40 21.97 0.00 0.00 0.00 0.00 0.00 00:12:01.831 [2024-12-10T11:15:08.657Z] =================================================================================================================== 00:12:01.831 [2024-12-10T11:15:08.657Z] Total : 5625.40 21.97 0.00 0.00 0.00 0.00 0.00 00:12:01.831 00:12:02.089 00:12:02.089 Latency(us) 00:12:02.089 [2024-12-10T11:15:08.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.089 Nvme0n1 : 10.02 5627.13 21.98 0.00 0.00 22738.39 2591.65 81026.33 00:12:02.089 [2024-12-10T11:15:08.915Z] =================================================================================================================== 00:12:02.089 [2024-12-10T11:15:08.915Z] Total : 5627.13 21.98 0.00 0.00 22738.39 2591.65 81026.33 00:12:02.089 { 00:12:02.089 "results": [ 00:12:02.089 { 00:12:02.089 "job": "Nvme0n1", 00:12:02.089 "core_mask": "0x2", 00:12:02.089 "workload": "randwrite", 00:12:02.089 "status": "finished", 00:12:02.089 "queue_depth": 128, 00:12:02.089 "io_size": 4096, 00:12:02.089 "runtime": 
10.019669, 00:12:02.089 "iops": 5627.131994080843, 00:12:02.089 "mibps": 21.98098435187829, 00:12:02.089 "io_failed": 0, 00:12:02.089 "io_timeout": 0, 00:12:02.089 "avg_latency_us": 22738.391068200362, 00:12:02.089 "min_latency_us": 2591.650909090909, 00:12:02.089 "max_latency_us": 81026.32727272727 00:12:02.089 } 00:12:02.089 ], 00:12:02.089 "core_count": 1 00:12:02.089 } 00:12:02.089 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66138 00:12:02.089 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66138 ']' 00:12:02.089 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66138 00:12:02.089 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:02.089 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.089 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66138 00:12:02.089 killing process with pid 66138 00:12:02.089 Received shutdown signal, test time was about 10.000000 seconds 00:12:02.089 00:12:02.089 Latency(us) 00:12:02.090 [2024-12-10T11:15:08.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.090 [2024-12-10T11:15:08.916Z] =================================================================================================================== 00:12:02.090 [2024-12-10T11:15:08.916Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:02.090 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:02.090 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:02.090 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66138' 00:12:02.090 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66138 00:12:02.090 11:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66138 00:12:03.025 11:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:03.283 11:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:03.542 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:12:03.542 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:04.110 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:04.110 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:04.110 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:04.368 [2024-12-10 11:15:10.950770] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.368 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:04.369 11:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:12:04.627 request: 00:12:04.627 { 00:12:04.627 "uuid": "3141f566-5a68-4d39-b53f-39c5c066e210", 00:12:04.627 "method": "bdev_lvol_get_lvstores", 00:12:04.627 "req_id": 1 00:12:04.627 } 00:12:04.627 Got JSON-RPC error response 00:12:04.627 response: 00:12:04.627 { 00:12:04.627 "code": -19, 00:12:04.627 "message": "No such device" 00:12:04.627 } 00:12:04.627 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:04.627 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.627 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.627 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.627 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:04.886 aio_bdev 00:12:04.886 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a76ae97d-a8af-46fe-9daa-b562aaad0ea1 00:12:04.886 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a76ae97d-a8af-46fe-9daa-b562aaad0ea1 00:12:04.886 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.886 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:04.886 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.886 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.886 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:05.145 11:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a76ae97d-a8af-46fe-9daa-b562aaad0ea1 -t 2000 00:12:05.712 [ 00:12:05.712 { 00:12:05.712 "name": "a76ae97d-a8af-46fe-9daa-b562aaad0ea1", 00:12:05.712 "aliases": [ 00:12:05.712 "lvs/lvol" 00:12:05.712 ], 00:12:05.712 "product_name": "Logical Volume", 00:12:05.712 "block_size": 4096, 00:12:05.712 "num_blocks": 38912, 00:12:05.712 "uuid": "a76ae97d-a8af-46fe-9daa-b562aaad0ea1", 00:12:05.712 "assigned_rate_limits": { 00:12:05.712 "rw_ios_per_sec": 0, 00:12:05.712 "rw_mbytes_per_sec": 0, 00:12:05.712 "r_mbytes_per_sec": 0, 00:12:05.712 "w_mbytes_per_sec": 0 00:12:05.712 }, 00:12:05.712 "claimed": false, 00:12:05.712 "zoned": false, 00:12:05.712 "supported_io_types": { 00:12:05.712 "read": true, 00:12:05.712 "write": true, 00:12:05.712 "unmap": true, 00:12:05.712 "flush": false, 00:12:05.712 "reset": true, 00:12:05.712 "nvme_admin": false, 00:12:05.712 "nvme_io": false, 00:12:05.712 "nvme_io_md": false, 00:12:05.712 "write_zeroes": true, 00:12:05.712 "zcopy": false, 00:12:05.712 "get_zone_info": false, 00:12:05.712 "zone_management": false, 00:12:05.712 "zone_append": false, 00:12:05.712 "compare": false, 00:12:05.712 "compare_and_write": false, 00:12:05.712 "abort": false, 00:12:05.712 "seek_hole": true, 00:12:05.712 "seek_data": true, 00:12:05.712 "copy": false, 00:12:05.712 "nvme_iov_md": false 00:12:05.712 }, 00:12:05.712 "driver_specific": { 00:12:05.712 "lvol": { 00:12:05.712 "lvol_store_uuid": "3141f566-5a68-4d39-b53f-39c5c066e210", 00:12:05.712 "base_bdev": "aio_bdev", 00:12:05.712 "thin_provision": false, 00:12:05.712 "num_allocated_clusters": 38, 00:12:05.712 "snapshot": false, 00:12:05.712 "clone": false, 00:12:05.712 "esnap_clone": false 00:12:05.712 } 00:12:05.712 } 00:12:05.712 } 00:12:05.712 ] 00:12:05.712 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:05.712 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:12:05.712 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:05.971 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:05.971 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:12:05.971 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:06.229 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:06.229 11:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a76ae97d-a8af-46fe-9daa-b562aaad0ea1 00:12:06.488 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3141f566-5a68-4d39-b53f-39c5c066e210 00:12:06.746 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:07.005 11:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:07.263 00:12:07.263 real 0m20.189s 00:12:07.263 user 0m19.279s 00:12:07.263 sys 0m2.521s 00:12:07.523 ************************************ 00:12:07.523 END TEST lvs_grow_clean 00:12:07.523 ************************************ 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:07.523 ************************************ 00:12:07.523 START TEST lvs_grow_dirty 00:12:07.523 ************************************ 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:07.523 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:07.782 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:07.782 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:08.040 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0f47fe25-724d-4556-8e9e-0875452d3120 00:12:08.040 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:08.040 11:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:08.298 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:08.298 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:08.298 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0f47fe25-724d-4556-8e9e-0875452d3120 lvol 150 00:12:08.864 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b8abc3eb-038e-4230-92a9-15b84ee4016e 00:12:08.864 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:08.864 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:09.122 [2024-12-10 11:15:15.689097] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:09.122 [2024-12-10 11:15:15.689215] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:09.122 true 00:12:09.122 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:09.122 11:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:09.382 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:09.382 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:09.641 11:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b8abc3eb-038e-4230-92a9-15b84ee4016e 00:12:10.207 11:15:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:10.465 [2024-12-10 11:15:17.058137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:10.465 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66438 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:10.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66438 /var/tmp/bdevperf.sock 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66438 ']' 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.724 11:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:10.724 [2024-12-10 11:15:17.500372] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:12:10.724 [2024-12-10 11:15:17.500537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66438 ] 00:12:10.983 [2024-12-10 11:15:17.701257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.290 [2024-12-10 11:15:17.837098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.290 [2024-12-10 11:15:18.064909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:11.857 11:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.857 11:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:11.857 11:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:12.116 Nvme0n1 00:12:12.116 11:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:12.375 [ 00:12:12.375 { 00:12:12.375 "name": "Nvme0n1", 00:12:12.375 "aliases": [ 00:12:12.375 "b8abc3eb-038e-4230-92a9-15b84ee4016e" 00:12:12.375 ], 00:12:12.375 "product_name": "NVMe disk", 00:12:12.375 "block_size": 4096, 00:12:12.375 "num_blocks": 38912, 00:12:12.375 "uuid": "b8abc3eb-038e-4230-92a9-15b84ee4016e", 00:12:12.375 "numa_id": -1, 00:12:12.375 "assigned_rate_limits": { 00:12:12.375 "rw_ios_per_sec": 0, 00:12:12.375 "rw_mbytes_per_sec": 0, 00:12:12.375 "r_mbytes_per_sec": 0, 00:12:12.375 "w_mbytes_per_sec": 0 00:12:12.375 }, 00:12:12.375 "claimed": false, 00:12:12.375 "zoned": false, 00:12:12.375 "supported_io_types": { 00:12:12.375 "read": true, 00:12:12.375 "write": true, 00:12:12.375 "unmap": true, 00:12:12.375 "flush": true, 00:12:12.375 "reset": true, 00:12:12.375 "nvme_admin": true, 00:12:12.375 "nvme_io": true, 00:12:12.375 "nvme_io_md": false, 00:12:12.375 "write_zeroes": true, 00:12:12.375 "zcopy": false, 00:12:12.375 "get_zone_info": false, 00:12:12.375 "zone_management": false, 00:12:12.375 "zone_append": false, 00:12:12.375 "compare": true, 00:12:12.375 "compare_and_write": true, 00:12:12.375 "abort": true, 00:12:12.375 "seek_hole": false, 00:12:12.375 "seek_data": false, 00:12:12.375 "copy": true, 00:12:12.375 "nvme_iov_md": false 00:12:12.375 }, 00:12:12.375 "memory_domains": [ 00:12:12.375 { 00:12:12.375 "dma_device_id": "system", 00:12:12.375 "dma_device_type": 1 00:12:12.375 } 00:12:12.375 ], 00:12:12.375 "driver_specific": { 00:12:12.375 "nvme": [ 00:12:12.375 { 00:12:12.375 "trid": { 00:12:12.375 "trtype": "TCP", 00:12:12.375 "adrfam": "IPv4", 00:12:12.375 "traddr": "10.0.0.3", 00:12:12.375 "trsvcid": "4420", 00:12:12.375 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:12.375 }, 00:12:12.375 "ctrlr_data": { 00:12:12.375 "cntlid": 1, 00:12:12.375 "vendor_id": "0x8086", 00:12:12.375 "model_number": "SPDK bdev Controller", 00:12:12.375 "serial_number": "SPDK0", 00:12:12.375 "firmware_revision": "25.01", 00:12:12.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:12.375 "oacs": { 00:12:12.375 "security": 0, 00:12:12.375 "format": 0, 00:12:12.375 "firmware": 0, 
00:12:12.375 "ns_manage": 0 00:12:12.375 }, 00:12:12.375 "multi_ctrlr": true, 00:12:12.375 "ana_reporting": false 00:12:12.375 }, 00:12:12.375 "vs": { 00:12:12.375 "nvme_version": "1.3" 00:12:12.375 }, 00:12:12.375 "ns_data": { 00:12:12.375 "id": 1, 00:12:12.375 "can_share": true 00:12:12.375 } 00:12:12.375 } 00:12:12.375 ], 00:12:12.375 "mp_policy": "active_passive" 00:12:12.375 } 00:12:12.375 } 00:12:12.375 ] 00:12:12.375 11:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66464 00:12:12.375 11:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:12.375 11:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:12.634 Running I/O for 10 seconds... 00:12:13.570 Latency(us) 00:12:13.570 [2024-12-10T11:15:20.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.570 Nvme0n1 : 1.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:12:13.570 [2024-12-10T11:15:20.396Z] =================================================================================================================== 00:12:13.570 [2024-12-10T11:15:20.396Z] Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:12:13.570 00:12:14.505 11:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:14.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.505 Nvme0n1 : 2.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:12:14.505 [2024-12-10T11:15:21.331Z] =================================================================================================================== 00:12:14.505 [2024-12-10T11:15:21.331Z] Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:12:14.505 00:12:14.764 true 00:12:14.764 11:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:14.764 11:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:15.338 11:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:15.338 11:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:15.338 11:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66464 00:12:15.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.596 Nvme0n1 : 3.00 5398.00 21.09 0.00 0.00 0.00 0.00 0.00 00:12:15.597 [2024-12-10T11:15:22.423Z] =================================================================================================================== 00:12:15.597 [2024-12-10T11:15:22.423Z] Total : 5398.00 21.09 0.00 0.00 0.00 0.00 0.00 00:12:15.597 00:12:16.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.531 Nvme0n1 : 4.00 5445.50 21.27 0.00 0.00 0.00 0.00 0.00 00:12:16.531 [2024-12-10T11:15:23.357Z] 
=================================================================================================================== 00:12:16.531 [2024-12-10T11:15:23.357Z] Total : 5445.50 21.27 0.00 0.00 0.00 0.00 0.00 00:12:16.531 00:12:17.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.904 Nvme0n1 : 5.00 5448.60 21.28 0.00 0.00 0.00 0.00 0.00 00:12:17.904 [2024-12-10T11:15:24.730Z] =================================================================================================================== 00:12:17.904 [2024-12-10T11:15:24.730Z] Total : 5448.60 21.28 0.00 0.00 0.00 0.00 0.00 00:12:17.904 00:12:18.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.837 Nvme0n1 : 6.00 5429.50 21.21 0.00 0.00 0.00 0.00 0.00 00:12:18.837 [2024-12-10T11:15:25.663Z] =================================================================================================================== 00:12:18.837 [2024-12-10T11:15:25.663Z] Total : 5429.50 21.21 0.00 0.00 0.00 0.00 0.00 00:12:18.837 00:12:19.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.773 Nvme0n1 : 7.00 5452.14 21.30 0.00 0.00 0.00 0.00 0.00 00:12:19.773 [2024-12-10T11:15:26.599Z] =================================================================================================================== 00:12:19.773 [2024-12-10T11:15:26.599Z] Total : 5452.14 21.30 0.00 0.00 0.00 0.00 0.00 00:12:19.773 00:12:20.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.708 Nvme0n1 : 8.00 5469.12 21.36 0.00 0.00 0.00 0.00 0.00 00:12:20.708 [2024-12-10T11:15:27.534Z] =================================================================================================================== 00:12:20.708 [2024-12-10T11:15:27.534Z] Total : 5469.12 21.36 0.00 0.00 0.00 0.00 0.00 00:12:20.708 00:12:21.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.642 Nvme0n1 : 9.00 5411.78 21.14 0.00 0.00 0.00 0.00 0.00 00:12:21.642 [2024-12-10T11:15:28.468Z] =================================================================================================================== 00:12:21.642 [2024-12-10T11:15:28.468Z] Total : 5411.78 21.14 0.00 0.00 0.00 0.00 0.00 00:12:21.642 00:12:22.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.635 Nvme0n1 : 10.00 5416.70 21.16 0.00 0.00 0.00 0.00 0.00 00:12:22.635 [2024-12-10T11:15:29.461Z] =================================================================================================================== 00:12:22.635 [2024-12-10T11:15:29.461Z] Total : 5416.70 21.16 0.00 0.00 0.00 0.00 0.00 00:12:22.635 00:12:22.635 00:12:22.635 Latency(us) 00:12:22.635 [2024-12-10T11:15:29.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.635 Nvme0n1 : 10.01 5425.30 21.19 0.00 0.00 23585.79 16920.20 100567.97 00:12:22.635 [2024-12-10T11:15:29.461Z] =================================================================================================================== 00:12:22.635 [2024-12-10T11:15:29.461Z] Total : 5425.30 21.19 0.00 0.00 23585.79 16920.20 100567.97 00:12:22.635 { 00:12:22.635 "results": [ 00:12:22.635 { 00:12:22.635 "job": "Nvme0n1", 00:12:22.635 "core_mask": "0x2", 00:12:22.635 "workload": "randwrite", 00:12:22.635 "status": "finished", 00:12:22.635 "queue_depth": 128, 00:12:22.635 "io_size": 4096, 00:12:22.635 "runtime": 
10.007733, 00:12:22.635 "iops": 5425.304611943584, 00:12:22.635 "mibps": 21.192596140404625, 00:12:22.635 "io_failed": 0, 00:12:22.635 "io_timeout": 0, 00:12:22.635 "avg_latency_us": 23585.786916391095, 00:12:22.635 "min_latency_us": 16920.203636363636, 00:12:22.635 "max_latency_us": 100567.9709090909 00:12:22.635 } 00:12:22.635 ], 00:12:22.635 "core_count": 1 00:12:22.635 } 00:12:22.635 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66438 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66438 ']' 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66438 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66438 00:12:22.636 killing process with pid 66438 00:12:22.636 Received shutdown signal, test time was about 10.000000 seconds 00:12:22.636 00:12:22.636 Latency(us) 00:12:22.636 [2024-12-10T11:15:29.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.636 [2024-12-10T11:15:29.462Z] =================================================================================================================== 00:12:22.636 [2024-12-10T11:15:29.462Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66438' 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66438 00:12:22.636 11:15:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66438 00:12:23.579 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:24.147 11:15:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:24.405 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:24.405 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66050 
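With the grown store verified earlier (99 total data clusters, 61 free), the dirty variant of the test now SIGKILLs the target so the lvstore metadata is left unclean and blobstore recovery has to run on the next load. The sequence replayed in the lines below condenses to this (a sketch using this run's pid, paths and lvstore UUID):

  kill -9 66050                                             # dirty shutdown of the nvmf target
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # re-creating the aio bdev triggers blobstore recovery of the dirty lvstore
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120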
00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66050 00:12:24.663 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66050 Killed "${NVMF_APP[@]}" "$@" 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66610 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66610 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66610 ']' 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.663 11:15:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:24.921 [2024-12-10 11:15:31.577203] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:12:24.921 [2024-12-10 11:15:31.577409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.180 [2024-12-10 11:15:31.771696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.180 [2024-12-10 11:15:31.877145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.180 [2024-12-10 11:15:31.877213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.180 [2024-12-10 11:15:31.877234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.180 [2024-12-10 11:15:31.877273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.180 [2024-12-10 11:15:31.877289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
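The restarted target enables every tracepoint group (-e 0xFFFF), which is why a /dev/shm/nvmf_trace.0 ring exists to be archived during teardown later in this log. A small sketch of how that trace can be examined or saved offline (the spdk_trace invocation follows the notice printed above; the binary path assumes a standard in-tree build):

  # decode the live trace ring of app instance 0, as the notice above suggests
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or archive the shared-memory ring for later analysis, as the harness does on exit
  tar -C /dev/shm/ -cvzf nvmf_trace.0_shm.tar.gz nvmf_trace.0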
00:12:25.180 [2024-12-10 11:15:31.878526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.438 [2024-12-10 11:15:32.059573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:26.005 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.005 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:26.005 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.005 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.005 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:26.005 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.005 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:26.263 [2024-12-10 11:15:32.870253] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:26.263 [2024-12-10 11:15:32.870730] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:26.263 [2024-12-10 11:15:32.870909] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:26.263 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:26.263 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b8abc3eb-038e-4230-92a9-15b84ee4016e 00:12:26.263 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b8abc3eb-038e-4230-92a9-15b84ee4016e 00:12:26.263 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:26.263 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:26.263 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:26.263 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:26.263 11:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:26.522 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8abc3eb-038e-4230-92a9-15b84ee4016e -t 2000 00:12:26.780 [ 00:12:26.780 { 00:12:26.780 "name": "b8abc3eb-038e-4230-92a9-15b84ee4016e", 00:12:26.780 "aliases": [ 00:12:26.780 "lvs/lvol" 00:12:26.780 ], 00:12:26.780 "product_name": "Logical Volume", 00:12:26.780 "block_size": 4096, 00:12:26.780 "num_blocks": 38912, 00:12:26.780 "uuid": "b8abc3eb-038e-4230-92a9-15b84ee4016e", 00:12:26.780 "assigned_rate_limits": { 00:12:26.780 "rw_ios_per_sec": 0, 00:12:26.780 "rw_mbytes_per_sec": 0, 00:12:26.780 "r_mbytes_per_sec": 0, 00:12:26.780 "w_mbytes_per_sec": 0 00:12:26.780 }, 00:12:26.780 
"claimed": false, 00:12:26.780 "zoned": false, 00:12:26.780 "supported_io_types": { 00:12:26.781 "read": true, 00:12:26.781 "write": true, 00:12:26.781 "unmap": true, 00:12:26.781 "flush": false, 00:12:26.781 "reset": true, 00:12:26.781 "nvme_admin": false, 00:12:26.781 "nvme_io": false, 00:12:26.781 "nvme_io_md": false, 00:12:26.781 "write_zeroes": true, 00:12:26.781 "zcopy": false, 00:12:26.781 "get_zone_info": false, 00:12:26.781 "zone_management": false, 00:12:26.781 "zone_append": false, 00:12:26.781 "compare": false, 00:12:26.781 "compare_and_write": false, 00:12:26.781 "abort": false, 00:12:26.781 "seek_hole": true, 00:12:26.781 "seek_data": true, 00:12:26.781 "copy": false, 00:12:26.781 "nvme_iov_md": false 00:12:26.781 }, 00:12:26.781 "driver_specific": { 00:12:26.781 "lvol": { 00:12:26.781 "lvol_store_uuid": "0f47fe25-724d-4556-8e9e-0875452d3120", 00:12:26.781 "base_bdev": "aio_bdev", 00:12:26.781 "thin_provision": false, 00:12:26.781 "num_allocated_clusters": 38, 00:12:26.781 "snapshot": false, 00:12:26.781 "clone": false, 00:12:26.781 "esnap_clone": false 00:12:26.781 } 00:12:26.781 } 00:12:26.781 } 00:12:26.781 ] 00:12:26.781 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:26.781 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:26.781 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:27.039 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:27.039 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:27.039 11:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:27.297 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:27.297 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:27.555 [2024-12-10 11:15:34.368475] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:27.814 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:27.814 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:12:27.814 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:27.814 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:27.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:27.815 11:15:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:27.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:27.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:27.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:27.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:27.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:27.815 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:28.073 request: 00:12:28.073 { 00:12:28.073 "uuid": "0f47fe25-724d-4556-8e9e-0875452d3120", 00:12:28.073 "method": "bdev_lvol_get_lvstores", 00:12:28.073 "req_id": 1 00:12:28.073 } 00:12:28.073 Got JSON-RPC error response 00:12:28.073 response: 00:12:28.073 { 00:12:28.073 "code": -19, 00:12:28.073 "message": "No such device" 00:12:28.073 } 00:12:28.073 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:12:28.073 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:28.073 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:28.073 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:28.073 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:28.331 aio_bdev 00:12:28.331 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b8abc3eb-038e-4230-92a9-15b84ee4016e 00:12:28.331 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b8abc3eb-038e-4230-92a9-15b84ee4016e 00:12:28.331 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.331 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:28.331 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.331 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.331 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:28.589 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8abc3eb-038e-4230-92a9-15b84ee4016e -t 2000 00:12:28.846 [ 00:12:28.846 { 
00:12:28.846 "name": "b8abc3eb-038e-4230-92a9-15b84ee4016e", 00:12:28.846 "aliases": [ 00:12:28.846 "lvs/lvol" 00:12:28.846 ], 00:12:28.846 "product_name": "Logical Volume", 00:12:28.846 "block_size": 4096, 00:12:28.846 "num_blocks": 38912, 00:12:28.846 "uuid": "b8abc3eb-038e-4230-92a9-15b84ee4016e", 00:12:28.846 "assigned_rate_limits": { 00:12:28.846 "rw_ios_per_sec": 0, 00:12:28.846 "rw_mbytes_per_sec": 0, 00:12:28.846 "r_mbytes_per_sec": 0, 00:12:28.846 "w_mbytes_per_sec": 0 00:12:28.846 }, 00:12:28.846 "claimed": false, 00:12:28.846 "zoned": false, 00:12:28.846 "supported_io_types": { 00:12:28.846 "read": true, 00:12:28.846 "write": true, 00:12:28.846 "unmap": true, 00:12:28.846 "flush": false, 00:12:28.846 "reset": true, 00:12:28.846 "nvme_admin": false, 00:12:28.846 "nvme_io": false, 00:12:28.846 "nvme_io_md": false, 00:12:28.846 "write_zeroes": true, 00:12:28.846 "zcopy": false, 00:12:28.846 "get_zone_info": false, 00:12:28.846 "zone_management": false, 00:12:28.846 "zone_append": false, 00:12:28.846 "compare": false, 00:12:28.846 "compare_and_write": false, 00:12:28.846 "abort": false, 00:12:28.846 "seek_hole": true, 00:12:28.846 "seek_data": true, 00:12:28.846 "copy": false, 00:12:28.846 "nvme_iov_md": false 00:12:28.846 }, 00:12:28.846 "driver_specific": { 00:12:28.846 "lvol": { 00:12:28.846 "lvol_store_uuid": "0f47fe25-724d-4556-8e9e-0875452d3120", 00:12:28.846 "base_bdev": "aio_bdev", 00:12:28.846 "thin_provision": false, 00:12:28.846 "num_allocated_clusters": 38, 00:12:28.846 "snapshot": false, 00:12:28.846 "clone": false, 00:12:28.846 "esnap_clone": false 00:12:28.846 } 00:12:28.846 } 00:12:28.846 } 00:12:28.846 ] 00:12:28.846 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:28.847 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:28.847 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:29.105 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:29.105 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:29.105 11:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:29.363 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:29.363 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b8abc3eb-038e-4230-92a9-15b84ee4016e 00:12:29.621 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f47fe25-724d-4556-8e9e-0875452d3120 00:12:30.188 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:30.188 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:30.804 ************************************ 00:12:30.804 END TEST lvs_grow_dirty 00:12:30.804 ************************************ 00:12:30.804 00:12:30.804 real 0m23.209s 00:12:30.804 user 0m50.736s 00:12:30.804 sys 0m7.769s 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:30.804 nvmf_trace.0 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.804 rmmod nvme_tcp 00:12:30.804 rmmod nvme_fabrics 00:12:30.804 rmmod nvme_keyring 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66610 ']' 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66610 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66610 ']' 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66610 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:12:30.804 11:15:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.804 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66610 00:12:31.063 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.063 killing process with pid 66610 00:12:31.063 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.063 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66610' 00:12:31.063 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66610 00:12:31.063 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66610 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:31.998 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:12:32.257 00:12:32.257 real 0m47.172s 00:12:32.257 user 1m17.979s 00:12:32.257 sys 0m11.193s 00:12:32.257 ************************************ 00:12:32.257 END TEST nvmf_lvs_grow 00:12:32.257 ************************************ 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:32.257 ************************************ 00:12:32.257 START TEST nvmf_bdev_io_wait 00:12:32.257 ************************************ 00:12:32.257 11:15:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:32.257 * Looking for test storage... 
00:12:32.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:32.257 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:32.257 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:32.257 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:32.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.517 --rc genhtml_branch_coverage=1 00:12:32.517 --rc genhtml_function_coverage=1 00:12:32.517 --rc genhtml_legend=1 00:12:32.517 --rc geninfo_all_blocks=1 00:12:32.517 --rc geninfo_unexecuted_blocks=1 00:12:32.517 00:12:32.517 ' 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:32.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.517 --rc genhtml_branch_coverage=1 00:12:32.517 --rc genhtml_function_coverage=1 00:12:32.517 --rc genhtml_legend=1 00:12:32.517 --rc geninfo_all_blocks=1 00:12:32.517 --rc geninfo_unexecuted_blocks=1 00:12:32.517 00:12:32.517 ' 00:12:32.517 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:32.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.517 --rc genhtml_branch_coverage=1 00:12:32.517 --rc genhtml_function_coverage=1 00:12:32.517 --rc genhtml_legend=1 00:12:32.517 --rc geninfo_all_blocks=1 00:12:32.518 --rc geninfo_unexecuted_blocks=1 00:12:32.518 00:12:32.518 ' 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:32.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.518 --rc genhtml_branch_coverage=1 00:12:32.518 --rc genhtml_function_coverage=1 00:12:32.518 --rc genhtml_legend=1 00:12:32.518 --rc geninfo_all_blocks=1 00:12:32.518 --rc geninfo_unexecuted_blocks=1 00:12:32.518 00:12:32.518 ' 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.518 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
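MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 describe the backing device this test will export once the target is up. In the nvmf target tests those values are normally handed to bdev_malloc_create and the resulting bdev is added to a subsystem; the commands are not shown at this point in the log, so the sketch below is hedged, with Malloc0 and nqn.2016-06.io.spdk:cnode1 as assumed names:

  # 64 MiB malloc bdev with 512-byte blocks, matching the variables above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  # typical follow-up: expose it via a subsystem using the serial defined in common.sh
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0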
00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:32.518 
11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:32.518 Cannot find device "nvmf_init_br" 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:32.518 Cannot find device "nvmf_init_br2" 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:32.518 Cannot find device "nvmf_tgt_br" 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:32.518 Cannot find device "nvmf_tgt_br2" 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:32.518 Cannot find device "nvmf_init_br" 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:32.518 Cannot find device "nvmf_init_br2" 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:12:32.518 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:32.518 Cannot find device "nvmf_tgt_br" 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:32.519 Cannot find device "nvmf_tgt_br2" 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:32.519 Cannot find device "nvmf_br" 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:32.519 Cannot find device "nvmf_init_if" 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:32.519 Cannot find device "nvmf_init_if2" 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:12:32.519 
11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.519 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:32.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:32.778 00:12:32.778 --- 10.0.0.3 ping statistics --- 00:12:32.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.778 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:32.778 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:32.778 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:12:32.778 00:12:32.778 --- 10.0.0.4 ping statistics --- 00:12:32.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.778 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:32.778 00:12:32.778 --- 10.0.0.1 ping statistics --- 00:12:32.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.778 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:32.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:32.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:12:32.778 00:12:32.778 --- 10.0.0.2 ping statistics --- 00:12:32.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.778 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67004 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67004 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67004 ']' 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.778 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.779 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.037 [2024-12-10 11:15:39.714617] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
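Condensed, the veth/namespace bring-up traced above (nvmf/common.sh lines 177 through 225) amounts to the following sketch, assembled from the commands in the log; it must run as root, and the final pings are what verify the bridged path before the target starts:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done
  # open the NVMe/TCP port and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                    # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # and back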
00:12:33.037 [2024-12-10 11:15:39.714777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.296 [2024-12-10 11:15:39.908752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.296 [2024-12-10 11:15:40.053239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.296 [2024-12-10 11:15:40.053316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.296 [2024-12-10 11:15:40.053341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.296 [2024-12-10 11:15:40.053381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.296 [2024-12-10 11:15:40.053399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.296 [2024-12-10 11:15:40.055529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.296 [2024-12-10 11:15:40.055595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.296 [2024-12-10 11:15:40.055738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.296 [2024-12-10 11:15:40.055752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.232 [2024-12-10 11:15:40.961210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.232 [2024-12-10 11:15:40.982468] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.232 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.491 Malloc0 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.491 [2024-12-10 11:15:41.084465] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67039 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:34.491 { 00:12:34.491 
"params": { 00:12:34.491 "name": "Nvme$subsystem", 00:12:34.491 "trtype": "$TEST_TRANSPORT", 00:12:34.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:34.491 "adrfam": "ipv4", 00:12:34.491 "trsvcid": "$NVMF_PORT", 00:12:34.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:34.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:34.491 "hdgst": ${hdgst:-false}, 00:12:34.491 "ddgst": ${ddgst:-false} 00:12:34.491 }, 00:12:34.491 "method": "bdev_nvme_attach_controller" 00:12:34.491 } 00:12:34.491 EOF 00:12:34.491 )") 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67041 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:34.491 { 00:12:34.491 "params": { 00:12:34.491 "name": "Nvme$subsystem", 00:12:34.491 "trtype": "$TEST_TRANSPORT", 00:12:34.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:34.491 "adrfam": "ipv4", 00:12:34.491 "trsvcid": "$NVMF_PORT", 00:12:34.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:34.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:34.491 "hdgst": ${hdgst:-false}, 00:12:34.491 "ddgst": ${ddgst:-false} 00:12:34.491 }, 00:12:34.491 "method": "bdev_nvme_attach_controller" 00:12:34.491 } 00:12:34.491 EOF 00:12:34.491 )") 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67045 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67049 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:34.491 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:34.491 { 00:12:34.491 "params": { 00:12:34.491 "name": "Nvme$subsystem", 00:12:34.491 "trtype": "$TEST_TRANSPORT", 00:12:34.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:34.492 "adrfam": "ipv4", 00:12:34.492 "trsvcid": "$NVMF_PORT", 00:12:34.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:34.492 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:12:34.492 "hdgst": ${hdgst:-false}, 00:12:34.492 "ddgst": ${ddgst:-false} 00:12:34.492 }, 00:12:34.492 "method": "bdev_nvme_attach_controller" 00:12:34.492 } 00:12:34.492 EOF 00:12:34.492 )") 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:34.492 "params": { 00:12:34.492 "name": "Nvme1", 00:12:34.492 "trtype": "tcp", 00:12:34.492 "traddr": "10.0.0.3", 00:12:34.492 "adrfam": "ipv4", 00:12:34.492 "trsvcid": "4420", 00:12:34.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:34.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:34.492 "hdgst": false, 00:12:34.492 "ddgst": false 00:12:34.492 }, 00:12:34.492 "method": "bdev_nvme_attach_controller" 00:12:34.492 }' 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:34.492 { 00:12:34.492 "params": { 00:12:34.492 "name": "Nvme$subsystem", 00:12:34.492 "trtype": "$TEST_TRANSPORT", 00:12:34.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:34.492 "adrfam": "ipv4", 00:12:34.492 "trsvcid": "$NVMF_PORT", 00:12:34.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:34.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:34.492 "hdgst": ${hdgst:-false}, 00:12:34.492 "ddgst": ${ddgst:-false} 00:12:34.492 }, 00:12:34.492 "method": "bdev_nvme_attach_controller" 00:12:34.492 } 00:12:34.492 EOF 00:12:34.492 )") 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:34.492 "params": { 00:12:34.492 "name": "Nvme1", 00:12:34.492 "trtype": "tcp", 00:12:34.492 "traddr": "10.0.0.3", 00:12:34.492 "adrfam": "ipv4", 00:12:34.492 "trsvcid": "4420", 00:12:34.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:34.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:34.492 "hdgst": false, 00:12:34.492 "ddgst": false 00:12:34.492 }, 00:12:34.492 "method": "bdev_nvme_attach_controller" 00:12:34.492 }' 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:34.492 "params": { 00:12:34.492 "name": "Nvme1", 00:12:34.492 "trtype": "tcp", 00:12:34.492 "traddr": "10.0.0.3", 00:12:34.492 "adrfam": "ipv4", 00:12:34.492 "trsvcid": "4420", 00:12:34.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:34.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:34.492 "hdgst": false, 00:12:34.492 "ddgst": false 00:12:34.492 }, 00:12:34.492 "method": "bdev_nvme_attach_controller" 00:12:34.492 }' 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:34.492 "params": { 00:12:34.492 "name": "Nvme1", 00:12:34.492 "trtype": "tcp", 00:12:34.492 "traddr": "10.0.0.3", 00:12:34.492 "adrfam": "ipv4", 00:12:34.492 "trsvcid": "4420", 00:12:34.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:34.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:34.492 "hdgst": false, 00:12:34.492 "ddgst": false 00:12:34.492 }, 00:12:34.492 "method": "bdev_nvme_attach_controller" 00:12:34.492 }' 00:12:34.492 11:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67039 00:12:34.492 [2024-12-10 11:15:41.211861] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:12:34.492 [2024-12-10 11:15:41.212202] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:34.492 [2024-12-10 11:15:41.233193] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:12:34.492 [2024-12-10 11:15:41.233611] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:34.492 [2024-12-10 11:15:41.237319] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:12:34.492 [2024-12-10 11:15:41.237528] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:34.492 [2024-12-10 11:15:41.238439] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:12:34.492 [2024-12-10 11:15:41.238581] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:34.752 [2024-12-10 11:15:41.468826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.752 [2024-12-10 11:15:41.473666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.752 [2024-12-10 11:15:41.523256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.752 [2024-12-10 11:15:41.566459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.752 [2024-12-10 11:15:41.570109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:34.752 [2024-12-10 11:15:41.572617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:35.011 [2024-12-10 11:15:41.620296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:35.011 [2024-12-10 11:15:41.683505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:35.011 [2024-12-10 11:15:41.737134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:35.011 [2024-12-10 11:15:41.739527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:35.011 [2024-12-10 11:15:41.827237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:35.269 [2024-12-10 11:15:41.874394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:35.269 Running I/O for 1 seconds... 00:12:35.269 Running I/O for 1 seconds... 00:12:35.269 Running I/O for 1 seconds... 00:12:35.269 Running I/O for 1 seconds... 
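Each of the four "Running I/O for 1 seconds..." lines belongs to a separate bdevperf instance, one per workload, each on its own core mask and DPDK file prefix. The launch pattern, condensed from the trace (a sketch; gen_nvmf_target_json is the nvmf/common.sh helper that emits the bdev_nvme_attach_controller config printed above, and process substitution is what appears as --json /dev/fd/63):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  $BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID       # per-workload result tables follow below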
00:12:36.203 8289.00 IOPS, 32.38 MiB/s [2024-12-10T11:15:43.029Z] 5956.00 IOPS, 23.27 MiB/s 00:12:36.203 Latency(us) 00:12:36.203 [2024-12-10T11:15:43.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.203 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:36.203 Nvme1n1 : 1.01 8346.09 32.60 0.00 0.00 15260.11 4349.21 22163.08 00:12:36.203 [2024-12-10T11:15:43.029Z] =================================================================================================================== 00:12:36.203 [2024-12-10T11:15:43.029Z] Total : 8346.09 32.60 0.00 0.00 15260.11 4349.21 22163.08 00:12:36.203 00:12:36.203 Latency(us) 00:12:36.203 [2024-12-10T11:15:43.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.203 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:36.203 Nvme1n1 : 1.02 6000.68 23.44 0.00 0.00 21178.04 7804.74 31218.97 00:12:36.203 [2024-12-10T11:15:43.029Z] =================================================================================================================== 00:12:36.203 [2024-12-10T11:15:43.029Z] Total : 6000.68 23.44 0.00 0.00 21178.04 7804.74 31218.97 00:12:36.461 133056.00 IOPS, 519.75 MiB/s 00:12:36.461 Latency(us) 00:12:36.461 [2024-12-10T11:15:43.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.461 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:36.461 Nvme1n1 : 1.00 132752.30 518.56 0.00 0.00 959.25 463.59 2263.97 00:12:36.461 [2024-12-10T11:15:43.287Z] =================================================================================================================== 00:12:36.461 [2024-12-10T11:15:43.287Z] Total : 132752.30 518.56 0.00 0.00 959.25 463.59 2263.97 00:12:36.461 6893.00 IOPS, 26.93 MiB/s 00:12:36.461 Latency(us) 00:12:36.461 [2024-12-10T11:15:43.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.461 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:36.461 Nvme1n1 : 1.01 6956.74 27.17 0.00 0.00 18291.95 9532.51 31695.59 00:12:36.461 [2024-12-10T11:15:43.287Z] =================================================================================================================== 00:12:36.461 [2024-12-10T11:15:43.287Z] Total : 6956.74 27.17 0.00 0.00 18291.95 9532.51 31695.59 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67041 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67045 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67049 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.026 rmmod nvme_tcp 00:12:37.026 rmmod nvme_fabrics 00:12:37.026 rmmod nvme_keyring 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67004 ']' 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67004 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67004 ']' 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67004 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.026 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67004 00:12:37.334 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.334 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.334 killing process with pid 67004 00:12:37.334 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67004' 00:12:37.334 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67004 00:12:37.334 11:15:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67004 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:38.269 11:15:44 
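killprocess (common/autotest_common.sh) is what produced the "killing process with pid 67004" lines above: it confirms the PID still exists, checks the process name (refusing to kill a bare sudo wrapper), then kills and reaps it. An approximate reconstruction of the traced flow, not the literal function body:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                        # still running?
      if [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid" || true                           # reap so the exit status lands in the log
      fi
  }
  killprocess "$nvmfpid"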
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:38.269 11:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:38.269 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:38.269 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:38.269 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:38.269 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:38.269 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.269 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.269 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:12:38.527 00:12:38.527 real 0m6.117s 00:12:38.527 user 0m25.805s 00:12:38.527 sys 0m2.724s 00:12:38.527 ************************************ 00:12:38.527 END TEST nvmf_bdev_io_wait 00:12:38.527 ************************************ 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:38.527 ************************************ 00:12:38.527 START TEST nvmf_queue_depth 00:12:38.527 ************************************ 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:38.527 * Looking for test 
storage... 00:12:38.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:38.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.527 --rc genhtml_branch_coverage=1 00:12:38.527 --rc genhtml_function_coverage=1 00:12:38.527 --rc genhtml_legend=1 00:12:38.527 --rc geninfo_all_blocks=1 00:12:38.527 --rc geninfo_unexecuted_blocks=1 00:12:38.527 00:12:38.527 ' 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:38.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.527 --rc genhtml_branch_coverage=1 00:12:38.527 --rc genhtml_function_coverage=1 00:12:38.527 --rc genhtml_legend=1 00:12:38.527 --rc geninfo_all_blocks=1 00:12:38.527 --rc geninfo_unexecuted_blocks=1 00:12:38.527 00:12:38.527 ' 00:12:38.527 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:38.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.527 --rc genhtml_branch_coverage=1 00:12:38.528 --rc genhtml_function_coverage=1 00:12:38.528 --rc genhtml_legend=1 00:12:38.528 --rc geninfo_all_blocks=1 00:12:38.528 --rc geninfo_unexecuted_blocks=1 00:12:38.528 00:12:38.528 ' 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:38.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.528 --rc genhtml_branch_coverage=1 00:12:38.528 --rc genhtml_function_coverage=1 00:12:38.528 --rc genhtml_legend=1 00:12:38.528 --rc geninfo_all_blocks=1 00:12:38.528 --rc geninfo_unexecuted_blocks=1 00:12:38.528 00:12:38.528 ' 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.528 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.528 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:38.786 
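The "[: : integer expression expected" message from nvmf/common.sh line 33 is bash complaining that build_nvmf_app_args ran an arithmetic test against an empty variable ('[' '' -eq 1 ']'). It is harmless here, the test simply evaluates false, but the failing pattern and the usual guard look like this (sketch; the variable name is illustrative, the real one is whichever flag is unset in this job's config):

  flag=""
  [ "$flag" -eq 1 ]          # -> [: : integer expression expected, condition is false
  [ "${flag:-0}" -eq 1 ]     # defaulting the empty value keeps the check quiet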
11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:38.786 11:15:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:38.786 Cannot find device "nvmf_init_br" 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:12:38.786 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:38.786 Cannot find device "nvmf_init_br2" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:38.787 Cannot find device "nvmf_tgt_br" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:38.787 Cannot find device "nvmf_tgt_br2" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:38.787 Cannot find device "nvmf_init_br" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:38.787 Cannot find device "nvmf_init_br2" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:38.787 Cannot find device "nvmf_tgt_br" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:38.787 Cannot find device "nvmf_tgt_br2" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:38.787 Cannot find device "nvmf_br" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:38.787 Cannot find device "nvmf_init_if" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:38.787 Cannot find device "nvmf_init_if2" 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:38.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:38.787 11:15:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:38.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:38.787 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:39.046 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:39.046 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:39.046 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:39.046 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:39.046 
11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:39.046 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:39.046 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:39.046 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:39.047 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:39.047 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:12:39.047 00:12:39.047 --- 10.0.0.3 ping statistics --- 00:12:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.047 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:39.047 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:39.047 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:12:39.047 00:12:39.047 --- 10.0.0.4 ping statistics --- 00:12:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.047 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:39.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:12:39.047 00:12:39.047 --- 10.0.0.1 ping statistics --- 00:12:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.047 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:39.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:39.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:12:39.047 00:12:39.047 --- 10.0.0.2 ping statistics --- 00:12:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.047 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67351 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67351 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67351 ']' 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.047 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.306 [2024-12-10 11:15:45.877054] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:12:39.306 [2024-12-10 11:15:45.877485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.306 [2024-12-10 11:15:46.071311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.564 [2024-12-10 11:15:46.200838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.564 [2024-12-10 11:15:46.201068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.564 [2024-12-10 11:15:46.201272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.564 [2024-12-10 11:15:46.201507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.564 [2024-12-10 11:15:46.201683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.564 [2024-12-10 11:15:46.203313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.823 [2024-12-10 11:15:46.435999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:40.081 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.081 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:40.081 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.081 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:40.081 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.081 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.081 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.081 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.082 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.082 [2024-12-10 11:15:46.898986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.082 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.082 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:40.082 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.082 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.340 Malloc0 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.340 [2024-12-10 11:15:47.020937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:40.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67383 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67383 /var/tmp/bdevperf.sock 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67383 ']' 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.340 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.340 [2024-12-10 11:15:47.137907] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:12:40.340 [2024-12-10 11:15:47.138072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67383 ] 00:12:40.599 [2024-12-10 11:15:47.325340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.857 [2024-12-10 11:15:47.454384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.857 [2024-12-10 11:15:47.673476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.428 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.428 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:41.428 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:41.428 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.428 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.687 NVMe0n1 00:12:41.687 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.687 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:41.687 Running I/O for 10 seconds... 00:12:43.995 4998.00 IOPS, 19.52 MiB/s [2024-12-10T11:15:51.755Z] 5149.00 IOPS, 20.11 MiB/s [2024-12-10T11:15:52.704Z] 5327.33 IOPS, 20.81 MiB/s [2024-12-10T11:15:53.640Z] 5221.00 IOPS, 20.39 MiB/s [2024-12-10T11:15:54.575Z] 5328.20 IOPS, 20.81 MiB/s [2024-12-10T11:15:55.509Z] 5406.67 IOPS, 21.12 MiB/s [2024-12-10T11:15:56.456Z] 5426.14 IOPS, 21.20 MiB/s [2024-12-10T11:15:57.830Z] 5497.25 IOPS, 21.47 MiB/s [2024-12-10T11:15:58.765Z] 5513.56 IOPS, 21.54 MiB/s [2024-12-10T11:15:58.765Z] 5608.40 IOPS, 21.91 MiB/s 00:12:51.939 Latency(us) 00:12:51.939 [2024-12-10T11:15:58.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.939 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:51.939 Verification LBA range: start 0x0 length 0x4000 00:12:51.939 NVMe0n1 : 10.12 5630.94 22.00 0.00 0.00 180608.83 28240.06 227826.97 00:12:51.939 [2024-12-10T11:15:58.765Z] =================================================================================================================== 00:12:51.939 [2024-12-10T11:15:58.765Z] Total : 5630.94 22.00 0.00 0.00 180608.83 28240.06 227826.97 00:12:51.939 { 00:12:51.939 "results": [ 00:12:51.939 { 00:12:51.939 "job": "NVMe0n1", 00:12:51.939 "core_mask": "0x1", 00:12:51.939 "workload": "verify", 00:12:51.939 "status": "finished", 00:12:51.939 "verify_range": { 00:12:51.939 "start": 0, 00:12:51.939 "length": 16384 00:12:51.939 }, 00:12:51.939 "queue_depth": 1024, 00:12:51.939 "io_size": 4096, 00:12:51.939 "runtime": 10.1182, 00:12:51.939 "iops": 5630.942262457749, 00:12:51.939 "mibps": 21.995868212725583, 00:12:51.939 "io_failed": 0, 00:12:51.939 "io_timeout": 0, 00:12:51.939 "avg_latency_us": 180608.82642996532, 00:12:51.939 "min_latency_us": 28240.05818181818, 00:12:51.939 "max_latency_us": 227826.96727272728 
00:12:51.939 } 00:12:51.939 ], 00:12:51.939 "core_count": 1 00:12:51.939 } 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67383 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67383 ']' 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67383 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67383 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67383' 00:12:51.939 killing process with pid 67383 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67383 00:12:51.939 Received shutdown signal, test time was about 10.000000 seconds 00:12:51.939 00:12:51.939 Latency(us) 00:12:51.939 [2024-12-10T11:15:58.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.939 [2024-12-10T11:15:58.765Z] =================================================================================================================== 00:12:51.939 [2024-12-10T11:15:58.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:51.939 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67383 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.894 rmmod nvme_tcp 00:12:52.894 rmmod nvme_fabrics 00:12:52.894 rmmod nvme_keyring 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67351 ']' 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67351 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67351 ']' 
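For reference, the queue-depth run traced above reduces to roughly the following sequence (a minimal sketch reconstructed from the trace; the explicit rpc.py calls stand in for the suite's rpc_cmd wrapper, and the paths, ports and addresses are the ones visible in the log):

  # start the target inside the test namespace on core 1 (-m 0x2)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # expose a 64 MiB, 512-byte-block malloc bdev over NVMe/TCP on 10.0.0.3:4420
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # drive it with bdevperf: queue depth 1024, 4 KiB verify I/O, 10 seconds
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests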
00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67351 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67351 00:12:52.894 killing process with pid 67351 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67351' 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67351 00:12:52.894 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67351 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:54.272 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:54.273 11:16:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:12:54.273 00:12:54.273 real 0m15.847s 00:12:54.273 user 0m26.444s 00:12:54.273 sys 0m2.485s 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.273 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:54.273 ************************************ 00:12:54.273 END TEST nvmf_queue_depth 00:12:54.273 ************************************ 00:12:54.273 11:16:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:54.273 11:16:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.273 11:16:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.273 11:16:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.273 ************************************ 00:12:54.273 START TEST nvmf_target_multipath 00:12:54.273 ************************************ 00:12:54.273 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:54.532 * Looking for test storage... 
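The cleanup traced just above (nvmftestfini) is essentially the setup in reverse. Condensed, it amounts to the sketch below; the internals of the remove_spdk_ns helper are not visible in the trace, so the final ip netns delete is an assumption of what it boils down to:

  kill "$nvmfpid"                        # stop the target started for this test
  modprobe -v -r nvme-tcp                # unload initiator-side modules
  modprobe -v -r nvme-fabrics
  # drop only the firewall rules the suite tagged with the SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # dismantle the veth/bridge topology and the target namespace
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster
      ip link set "$l" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk       # assumed equivalent of remove_spdk_ns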
00:12:54.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:54.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.532 --rc genhtml_branch_coverage=1 00:12:54.532 --rc genhtml_function_coverage=1 00:12:54.532 --rc genhtml_legend=1 00:12:54.532 --rc geninfo_all_blocks=1 00:12:54.532 --rc geninfo_unexecuted_blocks=1 00:12:54.532 00:12:54.532 ' 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:54.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.532 --rc genhtml_branch_coverage=1 00:12:54.532 --rc genhtml_function_coverage=1 00:12:54.532 --rc genhtml_legend=1 00:12:54.532 --rc geninfo_all_blocks=1 00:12:54.532 --rc geninfo_unexecuted_blocks=1 00:12:54.532 00:12:54.532 ' 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:54.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.532 --rc genhtml_branch_coverage=1 00:12:54.532 --rc genhtml_function_coverage=1 00:12:54.532 --rc genhtml_legend=1 00:12:54.532 --rc geninfo_all_blocks=1 00:12:54.532 --rc geninfo_unexecuted_blocks=1 00:12:54.532 00:12:54.532 ' 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:54.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.532 --rc genhtml_branch_coverage=1 00:12:54.532 --rc genhtml_function_coverage=1 00:12:54.532 --rc genhtml_legend=1 00:12:54.532 --rc geninfo_all_blocks=1 00:12:54.532 --rc geninfo_unexecuted_blocks=1 00:12:54.532 00:12:54.532 ' 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.532 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.533 
11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.533 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:54.533 11:16:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:54.533 Cannot find device "nvmf_init_br" 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:54.533 Cannot find device "nvmf_init_br2" 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:54.533 Cannot find device "nvmf_tgt_br" 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:54.533 Cannot find device "nvmf_tgt_br2" 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:54.533 Cannot find device "nvmf_init_br" 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:12:54.533 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:54.792 Cannot find device "nvmf_init_br2" 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:54.792 Cannot find device "nvmf_tgt_br" 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:54.792 Cannot find device "nvmf_tgt_br2" 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:54.792 Cannot find device "nvmf_br" 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:54.792 Cannot find device "nvmf_init_if" 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:54.792 Cannot find device "nvmf_init_if2" 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:54.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:54.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
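The veth topology being rebuilt here for the multipath suite is the same layout every test in this file uses: two initiator-side and two target-side interfaces tied together by one bridge. Stripped of the xtrace noise, the setup condenses to the sketch below (the bridge enslaving, iptables ACCEPT rules and ping checks continue in the trace that follows):

  ip netns add nvmf_tgt_ns_spdk
  # two initiator-side and two target-side veth pairs
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator ends get 10.0.0.1/.2, target ends get 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up, including loopback inside the namespace
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # tie the *_br ends together on one bridge
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up
      ip link set "$l" master nvmf_br
  done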
00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:54.792 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:55.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:55.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:12:55.051 00:12:55.051 --- 10.0.0.3 ping statistics --- 00:12:55.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.051 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:55.051 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:55.051 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:12:55.051 00:12:55.051 --- 10.0.0.4 ping statistics --- 00:12:55.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.051 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:55.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:12:55.051 00:12:55.051 --- 10.0.0.1 ping statistics --- 00:12:55.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.051 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:55.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:12:55.051 00:12:55.051 --- 10.0.0.2 ping statistics --- 00:12:55.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.051 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=67779 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 67779 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 67779 ']' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
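Unlike the queue-depth run, which pinned the target to core mask 0x2, the multipath target is started across all four cores. In outline, the startup being traced here looks like the sketch below; the socket-poll loop is only a crude stand-in for the suite's waitforlisten helper, whose internals are not shown in the trace:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # crude stand-in for waitforlisten: wait for the RPC unix socket to appear
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  # then the TCP transport and the backing Malloc0 bdev are created over RPC
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0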
00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.051 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:55.052 [2024-12-10 11:16:01.816370] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:12:55.052 [2024-12-10 11:16:01.816552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.310 [2024-12-10 11:16:01.995513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.310 [2024-12-10 11:16:02.133210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.310 [2024-12-10 11:16:02.133304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.310 [2024-12-10 11:16:02.133328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.310 [2024-12-10 11:16:02.133343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.310 [2024-12-10 11:16:02.133387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.568 [2024-12-10 11:16:02.135587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.568 [2024-12-10 11:16:02.135771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.568 [2024-12-10 11:16:02.135876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.568 [2024-12-10 11:16:02.136053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.568 [2024-12-10 11:16:02.360791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:56.135 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.135 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:12:56.135 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.135 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.135 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:56.135 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.135 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:56.394 [2024-12-10 11:16:03.111171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.394 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:56.961 Malloc0 00:12:56.961 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:12:57.219 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.478 11:16:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:57.737 [2024-12-10 11:16:04.441161] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:57.737 11:16:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:12:57.995 [2024-12-10 11:16:04.765561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:12:57.995 11:16:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:58.253 11:16:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:12:58.254 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.254 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:12:58.254 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.254 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:58.254 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67883 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:00.783 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:13:00.783 [global] 00:13:00.783 thread=1 00:13:00.783 invalidate=1 00:13:00.783 rw=randrw 00:13:00.783 time_based=1 00:13:00.783 runtime=6 00:13:00.783 ioengine=libaio 00:13:00.783 direct=1 00:13:00.783 bs=4096 00:13:00.783 iodepth=128 00:13:00.783 norandommap=0 00:13:00.783 numjobs=1 00:13:00.783 00:13:00.783 verify_dump=1 00:13:00.783 verify_backlog=512 00:13:00.783 verify_state_save=0 00:13:00.783 do_verify=1 00:13:00.783 verify=crc32c-intel 00:13:00.783 [job0] 00:13:00.783 filename=/dev/nvme0n1 00:13:00.783 Could not set queue depth (nvme0n1) 00:13:00.783 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:00.783 fio-3.35 00:13:00.783 Starting 1 thread 00:13:01.349 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:01.957 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:02.549 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:02.807 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67883 00:13:06.992 00:13:06.992 job0: (groupid=0, jobs=1): err= 0: pid=67904: Tue Dec 10 11:16:13 2024 00:13:06.992 read: IOPS=7644, BW=29.9MiB/s (31.3MB/s)(179MiB/6002msec) 00:13:06.992 slat (usec): min=3, max=8644, avg=76.55, stdev=319.86 00:13:06.992 clat (usec): min=1534, max=23979, avg=11303.25, stdev=2501.12 00:13:06.992 lat (usec): min=2403, max=23992, avg=11379.79, stdev=2510.90 00:13:06.992 clat percentiles (usec): 00:13:06.992 | 1.00th=[ 5669], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9765], 00:13:06.992 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11207], 00:13:06.992 | 70.00th=[11600], 80.00th=[12649], 90.00th=[14877], 95.00th=[16319], 00:13:06.992 | 99.00th=[19268], 99.50th=[20841], 99.90th=[22676], 99.95th=[22938], 00:13:06.992 | 99.99th=[23200] 00:13:06.992 bw ( KiB/s): min= 4944, max=22384, per=54.90%, avg=16787.64, stdev=4780.49, samples=11 00:13:06.992 iops : min= 1236, max= 5596, avg=4196.91, stdev=1195.12, samples=11 00:13:06.992 write: IOPS=4550, BW=17.8MiB/s (18.6MB/s)(97.9MiB/5510msec); 0 zone resets 00:13:06.992 slat (usec): min=13, max=4654, avg=87.34, stdev=237.48 00:13:06.992 clat (usec): min=1474, max=25381, avg=9689.72, stdev=2110.65 00:13:06.992 lat (usec): min=1503, max=25416, avg=9777.06, stdev=2121.53 00:13:06.992 clat percentiles (usec): 00:13:06.992 | 1.00th=[ 4228], 5.00th=[ 5669], 10.00th=[ 7570], 20.00th=[ 8586], 00:13:06.992 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:13:06.992 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11994], 95.00th=[13566], 00:13:06.992 | 99.00th=[15664], 99.50th=[17171], 99.90th=[19530], 99.95th=[21103], 00:13:06.992 | 99.99th=[24773] 00:13:06.992 bw ( KiB/s): min= 5328, max=21904, per=91.93%, avg=16732.36, stdev=4634.12, samples=11 00:13:06.992 iops : min= 1332, max= 5476, avg=4183.09, stdev=1158.53, samples=11 00:13:06.992 lat (msec) : 2=0.01%, 4=0.28%, 10=39.84%, 20=59.42%, 50=0.45% 00:13:06.992 cpu : usr=4.97%, sys=18.21%, ctx=4061, majf=0, minf=90 00:13:06.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:06.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:06.992 issued rwts: total=45881,25072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:06.992 00:13:06.992 Run status group 0 (all jobs): 00:13:06.992 READ: bw=29.9MiB/s (31.3MB/s), 29.9MiB/s-29.9MiB/s (31.3MB/s-31.3MB/s), io=179MiB (188MB), run=6002-6002msec 00:13:06.992 WRITE: bw=17.8MiB/s (18.6MB/s), 17.8MiB/s-17.8MiB/s (18.6MB/s-18.6MB/s), io=97.9MiB (103MB), run=5510-5510msec 00:13:06.992 00:13:06.992 Disk stats (read/write): 00:13:06.992 nvme0n1: ios=44693/25072, merge=0/0, ticks=489567/229773, in_queue=719340, util=98.63% 00:13:06.992 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:13:06.992 11:16:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67986 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:07.559 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:13:07.559 [global] 00:13:07.559 thread=1 00:13:07.559 invalidate=1 00:13:07.559 rw=randrw 00:13:07.559 time_based=1 00:13:07.559 runtime=6 00:13:07.559 ioengine=libaio 00:13:07.559 direct=1 00:13:07.559 bs=4096 00:13:07.559 iodepth=128 00:13:07.559 norandommap=0 00:13:07.559 numjobs=1 00:13:07.559 00:13:07.559 verify_dump=1 00:13:07.559 verify_backlog=512 00:13:07.559 verify_state_save=0 00:13:07.559 do_verify=1 00:13:07.559 verify=crc32c-intel 00:13:07.559 [job0] 00:13:07.559 filename=/dev/nvme0n1 00:13:07.559 Could not set queue depth (nvme0n1) 00:13:07.559 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:07.559 fio-3.35 00:13:07.559 Starting 1 thread 00:13:08.496 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:08.753 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:09.011 
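This is the failover half of the round-robin run: one listener is made inaccessible and the other non-optimized, and the script then waits for the host's view of the path in sysfs to catch up. Condensed into a stand-alone sketch with the same NQN, addresses and 20-second timeout seen in the log; the polling loop approximates check_ana_state rather than reproducing the helper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Flip the ANA state on the two listeners.
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
    # Wait (up to ~20s) for the kernel multipath view of path 0 to report the new state.
    for _ in $(seq 20); do
        [[ "$(cat /sys/block/nvme0c0n1/ana_state)" == inaccessible ]] && break
        sleep 1
    done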
11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:09.011 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:09.269 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:09.834 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67986 00:13:14.021 00:13:14.021 job0: (groupid=0, jobs=1): err= 0: pid=68007: Tue Dec 10 11:16:20 2024 00:13:14.021 read: IOPS=9396, BW=36.7MiB/s (38.5MB/s)(220MiB/6004msec) 00:13:14.021 slat (usec): min=4, max=7573, avg=55.84, stdev=239.21 00:13:14.021 clat (usec): min=379, max=21773, avg=9558.82, stdev=2873.10 00:13:14.021 lat (usec): min=399, max=21785, avg=9614.66, stdev=2893.73 00:13:14.021 clat percentiles (usec): 00:13:14.021 | 1.00th=[ 1614], 5.00th=[ 4686], 10.00th=[ 5604], 20.00th=[ 7177], 00:13:14.021 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10159], 00:13:14.021 | 70.00th=[10683], 80.00th=[11338], 90.00th=[12649], 95.00th=[14746], 00:13:14.021 | 99.00th=[16581], 99.50th=[17433], 99.90th=[19792], 99.95th=[20055], 00:13:14.021 | 99.99th=[21890] 00:13:14.021 bw ( KiB/s): min= 3816, max=35752, per=50.94%, avg=19146.91, stdev=9770.74, samples=11 00:13:14.021 iops : min= 954, max= 8938, avg=4786.73, stdev=2442.68, samples=11 00:13:14.021 write: IOPS=5778, BW=22.6MiB/s (23.7MB/s)(113MiB/5007msec); 0 zone resets 00:13:14.021 slat (usec): min=14, max=2204, avg=61.96, stdev=155.14 00:13:14.021 clat (usec): min=333, max=19337, avg=7706.72, stdev=2447.76 00:13:14.021 lat (usec): min=389, max=19378, avg=7768.68, stdev=2467.94 00:13:14.021 clat percentiles (usec): 00:13:14.021 | 1.00th=[ 2638], 5.00th=[ 3818], 10.00th=[ 4359], 20.00th=[ 5080], 00:13:14.021 | 30.00th=[ 5932], 40.00th=[ 7111], 50.00th=[ 8586], 60.00th=[ 8979], 00:13:14.021 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10683], 00:13:14.021 | 99.00th=[13566], 99.50th=[14484], 99.90th=[16057], 99.95th=[16712], 00:13:14.021 | 99.99th=[18482] 00:13:14.021 bw ( KiB/s): min= 4096, max=35784, per=82.98%, avg=19181.09, stdev=9622.99, samples=11 00:13:14.021 iops : min= 1024, max= 8946, avg=4795.27, stdev=2405.75, samples=11 00:13:14.021 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.13% 00:13:14.021 lat (msec) : 2=0.77%, 4=3.18%, 10=60.07%, 20=35.72%, 50=0.05% 00:13:14.021 cpu : usr=5.63%, sys=22.47%, ctx=4980, majf=0, minf=114 00:13:14.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:14.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:14.021 issued rwts: total=56417,28934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.021 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:13:14.021 00:13:14.021 Run status group 0 (all jobs): 00:13:14.021 READ: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=220MiB (231MB), run=6004-6004msec 00:13:14.022 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=113MiB (119MB), run=5007-5007msec 00:13:14.022 00:13:14.022 Disk stats (read/write): 00:13:14.022 nvme0n1: ios=55780/28422, merge=0/0, ticks=511801/204622, in_queue=716423, util=98.70% 00:13:14.022 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:14.022 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.022 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:13:14.022 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:14.022 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.022 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:14.022 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.022 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:13:14.022 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:14.280 rmmod nvme_tcp 00:13:14.280 rmmod nvme_fabrics 00:13:14.280 rmmod nvme_keyring 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 67779 ']' 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 67779 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 67779 ']' 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 67779 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67779 00:13:14.280 killing process with pid 67779 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67779' 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 67779 00:13:14.280 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 67779 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:15.656 
11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:15.656 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:13:15.964 00:13:15.964 real 0m21.497s 00:13:15.964 user 1m19.358s 00:13:15.964 sys 0m9.542s 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:15.964 ************************************ 00:13:15.964 END TEST nvmf_target_multipath 00:13:15.964 ************************************ 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:15.964 ************************************ 00:13:15.964 START TEST nvmf_zcopy 00:13:15.964 ************************************ 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:15.964 * Looking for test storage... 
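The nvmftestfini/nvmf_veth_fini cleanup traced just above amounts to removing the SPDK-tagged iptables rules and deleting the veth/bridge topology. A rough equivalent with the same interface names; the final ip netns delete is an assumption, since the log only shows the namespace removal going through the _remove_spdk_ns helper:

    # Drop the SPDK_NVMF-tagged iptables rules, then tear down the veth/bridge topology.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster
        ip link set "$br" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed: the log hides this step behind _remove_spdk_ns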
00:13:15.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.964 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.965 --rc genhtml_branch_coverage=1 00:13:15.965 --rc genhtml_function_coverage=1 00:13:15.965 --rc genhtml_legend=1 00:13:15.965 --rc geninfo_all_blocks=1 00:13:15.965 --rc geninfo_unexecuted_blocks=1 00:13:15.965 00:13:15.965 ' 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.965 --rc genhtml_branch_coverage=1 00:13:15.965 --rc genhtml_function_coverage=1 00:13:15.965 --rc genhtml_legend=1 00:13:15.965 --rc geninfo_all_blocks=1 00:13:15.965 --rc geninfo_unexecuted_blocks=1 00:13:15.965 00:13:15.965 ' 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.965 --rc genhtml_branch_coverage=1 00:13:15.965 --rc genhtml_function_coverage=1 00:13:15.965 --rc genhtml_legend=1 00:13:15.965 --rc geninfo_all_blocks=1 00:13:15.965 --rc geninfo_unexecuted_blocks=1 00:13:15.965 00:13:15.965 ' 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.965 --rc genhtml_branch_coverage=1 00:13:15.965 --rc genhtml_function_coverage=1 00:13:15.965 --rc genhtml_legend=1 00:13:15.965 --rc geninfo_all_blocks=1 00:13:15.965 --rc geninfo_unexecuted_blocks=1 00:13:15.965 00:13:15.965 ' 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.965 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:16.224 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
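nvmf_veth_init, which runs next, rebuilds the same two-initiator/two-target topology the multipath test used: two initiator veth pairs left in the root namespace, two target pairs moved into nvmf_tgt_ns_spdk, everything bridged over nvmf_br, plus ACCEPT rules for TCP port 4420. A condensed sketch of that sequence with the names and 10.0.0.x/24 addresses from the log:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry addresses, the *_br ends are enslaved to the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT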
00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:16.224 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:16.225 Cannot find device "nvmf_init_br" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:13:16.225 11:16:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:16.225 Cannot find device "nvmf_init_br2" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:16.225 Cannot find device "nvmf_tgt_br" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:16.225 Cannot find device "nvmf_tgt_br2" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:16.225 Cannot find device "nvmf_init_br" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:16.225 Cannot find device "nvmf_init_br2" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:16.225 Cannot find device "nvmf_tgt_br" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:16.225 Cannot find device "nvmf_tgt_br2" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:16.225 Cannot find device "nvmf_br" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:16.225 Cannot find device "nvmf_init_if" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:16.225 Cannot find device "nvmf_init_if2" 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:16.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:16.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:16.225 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:16.225 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:16.225 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:16.225 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:16.225 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:16.225 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:16.225 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:16.225 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:16.225 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:16.225 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:16.483 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:16.483 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:16.483 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:16.483 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:16.483 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:16.483 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:16.484 11:16:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:16.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:16.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:13:16.484 00:13:16.484 --- 10.0.0.3 ping statistics --- 00:13:16.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.484 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:16.484 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:16.484 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 00:13:16.484 00:13:16.484 --- 10.0.0.4 ping statistics --- 00:13:16.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.484 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:16.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:16.484 00:13:16.484 --- 10.0.0.1 ping statistics --- 00:13:16.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.484 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:16.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:16.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:13:16.484 00:13:16.484 --- 10.0.0.2 ping statistics --- 00:13:16.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.484 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=68343 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 68343 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 68343 ']' 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.484 11:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.742 [2024-12-10 11:16:23.346784] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
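Note: the nvmf/common.sh trace above builds a self-contained test network: four veth pairs, with the nvmf_tgt_if* ends moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 / 10.0.0.4) and the nvmf_init_if* ends left in the host namespace (10.0.0.1 / 10.0.0.2), all *_br peers enslaved to the nvmf_br bridge, plus iptables ACCEPT rules for NVMe/TCP port 4420. A condensed sketch of the same setup, with names and addresses copied from the trace (the real helper also tears down any leftover interfaces first and tags its iptables rules with an SPDK_NVMF comment):

    # Sketch only: condensed from the nvmf/common.sh trace above.
    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own netns
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge the *_br peers together
    for i in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$i" up; done
    for i in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$i" master nvmf_br; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow in the trace confirm host-to-namespace reachability in both directions (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) before the nvmf target is started.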
00:13:16.742 [2024-12-10 11:16:23.346984] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.742 [2024-12-10 11:16:23.535490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.001 [2024-12-10 11:16:23.637973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.001 [2024-12-10 11:16:23.638037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.001 [2024-12-10 11:16:23.638056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.001 [2024-12-10 11:16:23.638080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.001 [2024-12-10 11:16:23.638095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.001 [2024-12-10 11:16:23.639274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.001 [2024-12-10 11:16:23.825721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.568 [2024-12-10 11:16:24.271816] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.568 [2024-12-10 11:16:24.292149] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.568 malloc0 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:17.568 { 00:13:17.568 "params": { 00:13:17.568 "name": "Nvme$subsystem", 00:13:17.568 "trtype": "$TEST_TRANSPORT", 00:13:17.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:17.568 "adrfam": "ipv4", 00:13:17.568 "trsvcid": "$NVMF_PORT", 00:13:17.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:17.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:17.568 "hdgst": ${hdgst:-false}, 00:13:17.568 "ddgst": ${ddgst:-false} 00:13:17.568 }, 00:13:17.568 "method": "bdev_nvme_attach_controller" 00:13:17.568 } 00:13:17.568 EOF 00:13:17.568 )") 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
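Note: rpc_cmd in the trace above is the autotest helper that forwards to scripts/rpc.py against the target's RPC socket (/var/tmp/spdk.sock, per the waitforlisten message). The provisioning the zcopy test performs is, as a sketch with the arguments copied from the log (the RPC= wrapper variable itself is illustrative):

    # Sketch of the zcopy test's target provisioning; commands/arguments copied from the rpc_cmd lines above.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy               # TCP transport, zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                      # 32 MB malloc bdev, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # First workload (target/zcopy.sh@33): 10 s verify at queue depth 128, 8 KiB I/O
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

bdevperf then attaches to that namespace over NVMe/TCP from the host side of the veth pair; the --json /dev/fd/62 argument in the trace is the process substitution of gen_nvmf_target_json, which emits the bdev_nvme_attach_controller config printed just below.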
00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:17.568 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:17.568 "params": { 00:13:17.568 "name": "Nvme1", 00:13:17.568 "trtype": "tcp", 00:13:17.568 "traddr": "10.0.0.3", 00:13:17.568 "adrfam": "ipv4", 00:13:17.568 "trsvcid": "4420", 00:13:17.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:17.568 "hdgst": false, 00:13:17.568 "ddgst": false 00:13:17.568 }, 00:13:17.569 "method": "bdev_nvme_attach_controller" 00:13:17.569 }' 00:13:17.826 [2024-12-10 11:16:24.454419] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:13:17.826 [2024-12-10 11:16:24.454576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68376 ] 00:13:17.826 [2024-12-10 11:16:24.640844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.084 [2024-12-10 11:16:24.775495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.342 [2024-12-10 11:16:24.982608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:18.599 Running I/O for 10 seconds... 00:13:20.517 4454.00 IOPS, 34.80 MiB/s [2024-12-10T11:16:28.275Z] 4460.50 IOPS, 34.85 MiB/s [2024-12-10T11:16:29.209Z] 4419.67 IOPS, 34.53 MiB/s [2024-12-10T11:16:30.582Z] 4429.50 IOPS, 34.61 MiB/s [2024-12-10T11:16:31.516Z] 4384.00 IOPS, 34.25 MiB/s [2024-12-10T11:16:32.450Z] 4398.67 IOPS, 34.36 MiB/s [2024-12-10T11:16:33.384Z] 4400.29 IOPS, 34.38 MiB/s [2024-12-10T11:16:34.317Z] 4402.62 IOPS, 34.40 MiB/s [2024-12-10T11:16:35.252Z] 4412.78 IOPS, 34.47 MiB/s [2024-12-10T11:16:35.252Z] 4403.90 IOPS, 34.41 MiB/s 00:13:28.426 Latency(us) 00:13:28.426 [2024-12-10T11:16:35.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.426 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:28.426 Verification LBA range: start 0x0 length 0x1000 00:13:28.426 Nvme1n1 : 10.02 4406.51 34.43 0.00 0.00 28962.44 4170.47 38368.35 00:13:28.426 [2024-12-10T11:16:35.252Z] =================================================================================================================== 00:13:28.426 [2024-12-10T11:16:35.252Z] Total : 4406.51 34.43 0.00 0.00 28962.44 4170.47 38368.35 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68505 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:29.840 { 00:13:29.840 "params": { 00:13:29.840 
"name": "Nvme$subsystem", 00:13:29.840 "trtype": "$TEST_TRANSPORT", 00:13:29.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:29.840 "adrfam": "ipv4", 00:13:29.840 "trsvcid": "$NVMF_PORT", 00:13:29.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:29.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:29.840 "hdgst": ${hdgst:-false}, 00:13:29.840 "ddgst": ${ddgst:-false} 00:13:29.840 }, 00:13:29.840 "method": "bdev_nvme_attach_controller" 00:13:29.840 } 00:13:29.840 EOF 00:13:29.840 )") 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:29.840 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:29.840 "params": { 00:13:29.840 "name": "Nvme1", 00:13:29.840 "trtype": "tcp", 00:13:29.840 "traddr": "10.0.0.3", 00:13:29.840 "adrfam": "ipv4", 00:13:29.840 "trsvcid": "4420", 00:13:29.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.840 "hdgst": false, 00:13:29.840 "ddgst": false 00:13:29.840 }, 00:13:29.840 "method": "bdev_nvme_attach_controller" 00:13:29.840 }' 00:13:29.840 [2024-12-10 11:16:36.230897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.230975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.242797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.242867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.254836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.254897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.270828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.270912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.278830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.278894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.290818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.290882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.302802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.302863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.310773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.310827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.322815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.322868] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.334770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.334828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.342795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.342853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.350784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.840 [2024-12-10 11:16:36.350844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.840 [2024-12-10 11:16:36.356705] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:13:29.841 [2024-12-10 11:16:36.356904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68505 ] 00:13:29.841 [2024-12-10 11:16:36.358830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.358887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.370830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.370892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.378870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.378936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.390885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.390966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.402868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.402939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.414810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.414872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.426814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.426863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.434790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.434848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.442862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.442915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.454880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
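Note: a quick cross-check on the first bdevperf run (the 10-second verify job summarised in the Nvme1n1 table above); both checks use only numbers already in the table, so this is plain arithmetic, not a new measurement:

    throughput: 4406.51 IOPS x 8192 B per I/O ≈ 36.1 MB/s ≈ 34.43 MiB/s, matching the MiB/s column
    latency:    queue depth / IOPS = 128 / 4406.51 ≈ 29.0 ms, consistent with the reported 28962.44 us average (Little's law)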
00:13:29.841 [2024-12-10 11:16:36.454948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.466807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.466853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.478851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.478907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.490832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.490880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.502822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.502878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.514834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.514882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.526857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.526914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.538849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.538898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.550945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.551023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.556713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.841 [2024-12-10 11:16:36.562869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.562922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.574957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.575035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.586932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.586989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.594925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.595000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.606966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.607036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.618903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.618968] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.630903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.630966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:29.841 [2024-12-10 11:16:36.638898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:29.841 [2024-12-10 11:16:36.638957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.646882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.646932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.654900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.654954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.666925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.666978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.678903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.678964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.683825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.101 [2024-12-10 11:16:36.690947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.691006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.702960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.703036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.715053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.715139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.726998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.727070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.734914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.734969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.742941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.742986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.750938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.750977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.762992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.763057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.775053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.775123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.787003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.787071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.799023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.799077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.810971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.811015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.822964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.823012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.835005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.835065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.842984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.843022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.850958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.851002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.858972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.859013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.870979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.871028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.875505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.101 [2024-12-10 11:16:36.883045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.883121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.895036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.895096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.907077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.907129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.101 [2024-12-10 11:16:36.919082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.101 [2024-12-10 11:16:36.919150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:13:30.361 [2024-12-10 11:16:36.931068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:36.931123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:36.942984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:36.943035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:36.955017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:36.955056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:36.967001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:36.967048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:36.979155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:36.979226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:36.991064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:36.991120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.003050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.003102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.015103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.015150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.027177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.027249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.039092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.039136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.051146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.051195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.063135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.063180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 Running I/O for 5 seconds... 
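Note: the paired "Requested NSID 1 already in use" / "Unable to add namespace" messages that fill this stretch of the log are the target rejecting repeated nvmf_subsystem_add_ns calls for NSID 1, which malloc0 already occupies. The zcopy test issues them deliberately while the second bdevperf job (perfpid 68505, 5 s randrw at queue depth 128) is running: each failed add still forces a subsystem pause/resume cycle, so in-flight zero-copy requests are exercised across the pause, which appears to be the point of this phase. A minimal sketch of a loop that produces this pattern (the iteration count and its exact placement in target/zcopy.sh are assumptions):

    # Illustrative only: re-adding an occupied NSID is expected to fail every time;
    # the value of the loop is the pause/resume churn it causes under active zcopy I/O.
    for _ in $(seq 1 50); do
        scripts/rpc.py -s /var/tmp/spdk.sock \
            nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done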
00:13:30.361 [2024-12-10 11:16:37.080961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.081041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.097132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.097209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.113263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.113316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.124781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.124831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.138359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.138415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.152049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.152113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.361 [2024-12-10 11:16:37.168987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.361 [2024-12-10 11:16:37.169046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.186379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.186468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.199094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.199149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.217207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.217255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.233460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.233505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.249178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.249231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.262172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.262217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.278083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.278150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.294100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 
[2024-12-10 11:16:37.294158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.310005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.310058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.323031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.323078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.341766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.341841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.355864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.355930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.373716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.373782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.390919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.390976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.404527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.404584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.420558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.420609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.620 [2024-12-10 11:16:37.438105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.620 [2024-12-10 11:16:37.438166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.455268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.455343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.471068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.471120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.487670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.487731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.500995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.501054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.519595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.519645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.533974] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.534053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.549446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.549522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.566708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.566768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.582958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.583009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.595592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.595651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.611754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.611816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.630997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.631060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.644606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.644651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.662489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.662546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.676439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.676490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.879 [2024-12-10 11:16:37.694259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.879 [2024-12-10 11:16:37.694320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.709511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.709607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.725305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.725383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.742298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.742345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.759315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.759401] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.772226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.772291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.790300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.790369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.803671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.803728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.818786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.818835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.835633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.835691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.848606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.848654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.868154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.868212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.882776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.882857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.900593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.900641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.914376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.914426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.929501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.929549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.946852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.946939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.138 [2024-12-10 11:16:37.960036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.138 [2024-12-10 11:16:37.960095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:37.979339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:37.979412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:37.994007] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:37.994058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.011923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.011981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.029028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.029081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.042294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.042363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.060976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.061034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 8664.00 IOPS, 67.69 MiB/s [2024-12-10T11:16:38.222Z] [2024-12-10 11:16:38.075389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.075443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.090371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.090427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.104767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.104826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.122156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.122217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.136030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.136082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.151154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.151203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.168510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.168586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.185454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.185499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.202524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.396 [2024-12-10 11:16:38.202578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.396 [2024-12-10 11:16:38.215309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:31.396 [2024-12-10 11:16:38.215375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.231557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.231611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.249961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.250033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.262895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.262947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.281584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.281635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.295534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.295586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.314835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.314893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.332038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.332095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.348216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.348266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.365187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.365235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.378039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.378092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.397625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.397682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.412262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.412319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.426586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.426635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.442374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.442419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.457967] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.458049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.655 [2024-12-10 11:16:38.473883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.655 [2024-12-10 11:16:38.473942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.913 [2024-12-10 11:16:38.489643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.913 [2024-12-10 11:16:38.489702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.913 [2024-12-10 11:16:38.502517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.913 [2024-12-10 11:16:38.502563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.913 [2024-12-10 11:16:38.520387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.913 [2024-12-10 11:16:38.520446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.913 [2024-12-10 11:16:38.534548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.913 [2024-12-10 11:16:38.534600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.913 [2024-12-10 11:16:38.552865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.913 [2024-12-10 11:16:38.552921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.913 [2024-12-10 11:16:38.567613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.913 [2024-12-10 11:16:38.567704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.913 [2024-12-10 11:16:38.585428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.913 [2024-12-10 11:16:38.585491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.913 [2024-12-10 11:16:38.601713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.913 [2024-12-10 11:16:38.601761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.913 [2024-12-10 11:16:38.617588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.913 [2024-12-10 11:16:38.617632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.914 [2024-12-10 11:16:38.630416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.914 [2024-12-10 11:16:38.630461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.914 [2024-12-10 11:16:38.648327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.914 [2024-12-10 11:16:38.648394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.914 [2024-12-10 11:16:38.664488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.914 [2024-12-10 11:16:38.664534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.914 [2024-12-10 11:16:38.682063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.914 [2024-12-10 11:16:38.682108] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.914 [2024-12-10 11:16:38.697496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.914 [2024-12-10 11:16:38.697546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.914 [2024-12-10 11:16:38.710479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.914 [2024-12-10 11:16:38.710525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.914 [2024-12-10 11:16:38.729151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.914 [2024-12-10 11:16:38.729196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.745195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.745240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.762729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.762778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.778457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.778502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.791641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.791695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.810385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.810456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.827789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.827843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.844908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.844953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.860580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.860629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.873192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.873250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.892268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.892320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.909195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.909253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.921808] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.921858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.940060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.940107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.957225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.957281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.972943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.972998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.173 [2024-12-10 11:16:38.985974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.173 [2024-12-10 11:16:38.986029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.005089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.005157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.022414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.022459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.038246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.038295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.053961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.054010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.070880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.070929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 8714.50 IOPS, 68.08 MiB/s [2024-12-10T11:16:39.258Z] [2024-12-10 11:16:39.083942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.084003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.102343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.102417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.119197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.119252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.132965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.133016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.151742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:32.432 [2024-12-10 11:16:39.151795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.168542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.168598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.184425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.184477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.200464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.200536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.213770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.213831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.229497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.229546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.432 [2024-12-10 11:16:39.244567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.432 [2024-12-10 11:16:39.244629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.258988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.259047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.276887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.276938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.293181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.293242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.309220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.309277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.325010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.325068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.338071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.338141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.356468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.356529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.371318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.371390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.386310] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.386399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.400946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.401002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.415171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.415228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.432854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.432908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.448790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.448847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.462232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.462288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.480578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.480634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.494477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.494553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.691 [2024-12-10 11:16:39.512481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.691 [2024-12-10 11:16:39.512542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.529423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.529470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.542425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.542479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.561284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.561343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.574430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.574511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.589604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.589669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.607511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.607587] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.623763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.623811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.640979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.641034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.657559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.657607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.674892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.674944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.687817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.687863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.706237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.706301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.723062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.723125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.736060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.736118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.754786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.754853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.950 [2024-12-10 11:16:39.772069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.950 [2024-12-10 11:16:39.772145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.784918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.784976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.801146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.801228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.818829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.818893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.835298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.835399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.848713] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.848784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.868954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.869007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.883575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.883627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.899078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.899169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.916964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.917042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.934022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.934070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.950601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.950654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.964397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.964471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.982836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.982896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:39.997171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:39.997220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:40.014008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:40.014060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.208 [2024-12-10 11:16:40.027220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.208 [2024-12-10 11:16:40.027274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.045428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.045475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.059075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.059123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 8697.33 IOPS, 67.95 MiB/s [2024-12-10T11:16:40.293Z] [2024-12-10 11:16:40.076438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:33.467 [2024-12-10 11:16:40.076495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.093109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.093165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.106628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.106691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.123072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.123154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.139815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.139881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.152593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.152659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.172663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.172731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.189758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.189815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.205734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.205783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.218890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.218937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.237828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.237886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.254380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.254434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.270453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.270528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.467 [2024-12-10 11:16:40.283467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.467 [2024-12-10 11:16:40.283539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.302588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.302650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.320048] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.320129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.334034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.334101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.351930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.351982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.366468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.366545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.381235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.381288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.395946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.396005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.410709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.410759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.428066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.428119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.440943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.440993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.456082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.456139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.473466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.473525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.489403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.489452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.502520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.502566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.521097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.521163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.534696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.534752] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.726 [2024-12-10 11:16:40.550128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.726 [2024-12-10 11:16:40.550181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.567836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.567888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.584533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.584589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.600687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.600736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.616938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.617004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.634737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.634806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.651477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.651530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.667660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.667732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.683228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.683278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.696009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.696067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.714076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.714149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.730737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.730794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.746276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.746328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.761865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.761919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.778763] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.778833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.985 [2024-12-10 11:16:40.795382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.985 [2024-12-10 11:16:40.795432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.812621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.812676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.829485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.829544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.841368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.841419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.855912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.855966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.870449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.870522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.887871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.887941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.900615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.900665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.915710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.915771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.933007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.933061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.949761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.949822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.967025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.967106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.980094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.980168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:40.998774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:40.998854] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:41.016441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:41.016527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:41.036005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:41.036106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.245 [2024-12-10 11:16:41.055725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.245 [2024-12-10 11:16:41.055797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.506 8676.75 IOPS, 67.79 MiB/s [2024-12-10T11:16:41.332Z] [2024-12-10 11:16:41.072936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.506 [2024-12-10 11:16:41.072991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.506 [2024-12-10 11:16:41.089339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.506 [2024-12-10 11:16:41.089413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.506 [2024-12-10 11:16:41.101006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.506 [2024-12-10 11:16:41.101054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.506 [2024-12-10 11:16:41.115539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.506 [2024-12-10 11:16:41.115613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.506 [2024-12-10 11:16:41.133047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.506 [2024-12-10 11:16:41.133111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.506 [2024-12-10 11:16:41.148981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.506 [2024-12-10 11:16:41.149044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.506 [2024-12-10 11:16:41.161308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.506 [2024-12-10 11:16:41.161370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.506 [2024-12-10 11:16:41.178112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.178158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.507 [2024-12-10 11:16:41.194499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.194555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.507 [2024-12-10 11:16:41.210216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.210264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.507 [2024-12-10 11:16:41.226940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.226997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.507 [2024-12-10 
11:16:41.239990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.240036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.507 [2024-12-10 11:16:41.258254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.258318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.507 [2024-12-10 11:16:41.271051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.271095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.507 [2024-12-10 11:16:41.288703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.288747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.507 [2024-12-10 11:16:41.305056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.305100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.507 [2024-12-10 11:16:41.323010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.507 [2024-12-10 11:16:41.323070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.335829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.335886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.354683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.354729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.368244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.368288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.386084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.386141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.403773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.403855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.417602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.417669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.433198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.433246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.449790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.449847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.462181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.462232] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.476937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.476984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.493713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.493764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.510870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.510932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.526707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.526758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.539613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.539658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.557869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.557945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.765 [2024-12-10 11:16:41.574381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.765 [2024-12-10 11:16:41.574459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.591877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.591937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.607659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.607716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.622958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.623030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.641398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.641455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.658549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.658593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.671773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.671818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.689943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.689990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.704200] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.704248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.722246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.722295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.738373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.738448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.752107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.752170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.770691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.770739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.786822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.786867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.803037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.803083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.816482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.816548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.024 [2024-12-10 11:16:41.835079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.024 [2024-12-10 11:16:41.835139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.849796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.849855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.864928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.864977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.880072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.880132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.895638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.895719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.909160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.909221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.925604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.925649] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.940182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.940236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.957039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.957089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.973243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.973300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:41.990516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:41.990566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:42.007213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:42.007262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:42.023934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:42.024013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:42.039618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:42.039676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:42.052106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:42.052148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 8698.40 IOPS, 67.96 MiB/s [2024-12-10T11:16:42.110Z] [2024-12-10 11:16:42.071848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:42.071913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:42.086563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:42.086628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284
00:13:35.284                                       Latency(us)
00:13:35.284 [2024-12-10T11:16:42.110Z] Device Information : runtime(s)    IOPS    MiB/s   Fail/s   TO/s    Average      min      max
00:13:35.284 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:35.284 Nvme1n1            :       5.02  8694.50   67.93     0.00   0.00   14697.07  4915.20  27405.96
00:13:35.284 [2024-12-10T11:16:42.110Z] ===================================================================================================================
00:13:35.284 [2024-12-10T11:16:42.110Z] Total              :              8694.50   67.93     0.00   0.00   14697.07  4915.20  27405.96
00:13:35.284 [2024-12-10 11:16:42.096483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:42.096539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.284 [2024-12-10 11:16:42.108503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.284 [2024-12-10 11:16:42.108566]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.120407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.120452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.132438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.132481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.144471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.144525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.156424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.156464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.168413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.168450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.176395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.176433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.184421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.184456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.196423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.196462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.208432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.208469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.220515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.220575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.232443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.232487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.244476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.244517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.256447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.256484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.268496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.268550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.280478] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.280516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.292460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.292496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.300440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.300475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.312495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.312541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.324534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.324592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.543 [2024-12-10 11:16:42.336500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.543 [2024-12-10 11:16:42.336539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.544 [2024-12-10 11:16:42.348487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.544 [2024-12-10 11:16:42.348526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.544 [2024-12-10 11:16:42.360465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.544 [2024-12-10 11:16:42.360500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.372511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.372551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.384580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.384633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.396484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.396521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.408515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.408557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.420569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.420626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.432610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.432680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.444579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.444632] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.456514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.456550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.468546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.468585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.480537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.480577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.488511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.488547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.500564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.500602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.512543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.512582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.524580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.524622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.536555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.536593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.548550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.548588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.560619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.560662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.572585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.572627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.584635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.584695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.596679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.596738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.608612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.608662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.802 [2024-12-10 11:16:42.620610] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.802 [2024-12-10 11:16:42.620655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.061 [2024-12-10 11:16:42.632629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.061 [2024-12-10 11:16:42.632676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.061 [2024-12-10 11:16:42.644610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.644654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.656634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.656681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.668726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.668795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.680620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.680668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.692830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.692920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.704628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.704673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.716666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.716711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.728659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.728706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.740652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.740703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.752658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.752697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.764749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.764815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.776783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.776852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.788828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.788912] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.800828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.800910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.812850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.812926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.824775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.824835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.836749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.836812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.848717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.848765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.860710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.860753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.872697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.872740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.062 [2024-12-10 11:16:42.884758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.062 [2024-12-10 11:16:42.884811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.321 [2024-12-10 11:16:42.896732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.321 [2024-12-10 11:16:42.896771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.321 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68505) - No such process 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68505 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.321 delay0 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.321 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:13:36.579 [2024-12-10 11:16:43.169746] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:43.140 Initializing NVMe Controllers 00:13:43.140 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:43.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:43.140 Initialization complete. Launching workers. 00:13:43.140 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 118 00:13:43.140 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 405, failed to submit 33 00:13:43.140 success 277, unsuccessful 128, failed 0 00:13:43.140 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:43.140 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:43.140 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:43.140 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:43.140 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:43.140 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:43.140 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:43.141 rmmod nvme_tcp 00:13:43.141 rmmod nvme_fabrics 00:13:43.141 rmmod nvme_keyring 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 68343 ']' 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 68343 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 68343 ']' 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 68343 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68343 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68343' 00:13:43.141 killing process with pid 68343 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 68343 00:13:43.141 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 68343 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:43.708 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # 
return 0 00:13:43.967 00:13:43.967 real 0m28.128s 00:13:43.967 user 0m45.960s 00:13:43.967 sys 0m7.079s 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:43.967 ************************************ 00:13:43.967 END TEST nvmf_zcopy 00:13:43.967 ************************************ 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:43.967 ************************************ 00:13:43.967 START TEST nvmf_nmic 00:13:43.967 ************************************ 00:13:43.967 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:44.228 * Looking for test storage... 00:13:44.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:44.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.228 --rc genhtml_branch_coverage=1 00:13:44.228 --rc genhtml_function_coverage=1 00:13:44.228 --rc genhtml_legend=1 00:13:44.228 --rc geninfo_all_blocks=1 00:13:44.228 --rc geninfo_unexecuted_blocks=1 00:13:44.228 00:13:44.228 ' 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:44.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.228 --rc genhtml_branch_coverage=1 00:13:44.228 --rc genhtml_function_coverage=1 00:13:44.228 --rc genhtml_legend=1 00:13:44.228 --rc geninfo_all_blocks=1 00:13:44.228 --rc geninfo_unexecuted_blocks=1 00:13:44.228 00:13:44.228 ' 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:44.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.228 --rc genhtml_branch_coverage=1 00:13:44.228 --rc genhtml_function_coverage=1 00:13:44.228 --rc genhtml_legend=1 00:13:44.228 --rc geninfo_all_blocks=1 00:13:44.228 --rc geninfo_unexecuted_blocks=1 00:13:44.228 00:13:44.228 ' 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:44.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.228 --rc genhtml_branch_coverage=1 00:13:44.228 --rc genhtml_function_coverage=1 00:13:44.228 --rc genhtml_legend=1 00:13:44.228 --rc geninfo_all_blocks=1 00:13:44.228 --rc geninfo_unexecuted_blocks=1 00:13:44.228 00:13:44.228 ' 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.228 11:16:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.228 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:44.229 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:44.229 11:16:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.229 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:44.229 Cannot 
find device "nvmf_init_br" 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:44.229 Cannot find device "nvmf_init_br2" 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:44.229 Cannot find device "nvmf_tgt_br" 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.229 Cannot find device "nvmf_tgt_br2" 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:13:44.229 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:44.488 Cannot find device "nvmf_init_br" 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:44.488 Cannot find device "nvmf_init_br2" 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:44.488 Cannot find device "nvmf_tgt_br" 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:44.488 Cannot find device "nvmf_tgt_br2" 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:44.488 Cannot find device "nvmf_br" 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:44.488 Cannot find device "nvmf_init_if" 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:44.488 Cannot find device "nvmf_init_if2" 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:44.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.488 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:44.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:13:44.747 00:13:44.747 --- 10.0.0.3 ping statistics --- 00:13:44.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.747 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:44.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:44.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:13:44.747 00:13:44.747 --- 10.0.0.4 ping statistics --- 00:13:44.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.747 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:44.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:44.747 00:13:44.747 --- 10.0.0.1 ping statistics --- 00:13:44.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.747 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:44.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:44.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:44.747 00:13:44.747 --- 10.0.0.2 ping statistics --- 00:13:44.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.747 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=68905 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 68905 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 68905 ']' 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.747 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.747 [2024-12-10 11:16:51.504152] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:13:44.747 [2024-12-10 11:16:51.504319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.006 [2024-12-10 11:16:51.691409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.006 [2024-12-10 11:16:51.822694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.006 [2024-12-10 11:16:51.822755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.006 [2024-12-10 11:16:51.822776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.006 [2024-12-10 11:16:51.822789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.006 [2024-12-10 11:16:51.822802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.006 [2024-12-10 11:16:51.824593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.006 [2024-12-10 11:16:51.824722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.006 [2024-12-10 11:16:51.824855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.006 [2024-12-10 11:16:51.824876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.264 [2024-12-10 11:16:52.010208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:45.830 [2024-12-10 11:16:52.559730] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:45.830 Malloc0 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:45.830 11:16:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.830 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.090 [2024-12-10 11:16:52.671764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.090 test case1: single bdev can't be used in multiple subsystems 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.090 [2024-12-10 11:16:52.695625] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:46.090 [2024-12-10 11:16:52.695705] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:46.090 [2024-12-10 11:16:52.695731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.090 request: 00:13:46.090 { 00:13:46.090 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:46.090 "namespace": { 00:13:46.090 "bdev_name": "Malloc0", 00:13:46.090 "no_auto_visible": false, 00:13:46.090 "hide_metadata": false 00:13:46.090 }, 00:13:46.090 "method": "nvmf_subsystem_add_ns", 00:13:46.090 "req_id": 1 00:13:46.090 } 00:13:46.090 Got JSON-RPC error response 00:13:46.090 response: 00:13:46.090 { 00:13:46.090 "code": -32602, 00:13:46.090 "message": "Invalid parameters" 00:13:46.090 } 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:46.090 Adding namespace failed - expected result. 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:46.090 test case2: host connect to nvmf target in multiple paths 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:46.090 [2024-12-10 11:16:52.707858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:46.090 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:13:46.349 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.349 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:13:46.349 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.349 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:46.349 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:13:48.247 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:48.247 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:48.247 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.247 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:48.247 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:13:48.247 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:13:48.247 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:48.247 [global] 00:13:48.247 thread=1 00:13:48.247 invalidate=1 00:13:48.247 rw=write 00:13:48.247 time_based=1 00:13:48.247 runtime=1 00:13:48.247 ioengine=libaio 00:13:48.247 direct=1 00:13:48.247 bs=4096 00:13:48.247 iodepth=1 00:13:48.247 norandommap=0 00:13:48.247 numjobs=1 00:13:48.247 00:13:48.247 verify_dump=1 00:13:48.247 verify_backlog=512 00:13:48.247 verify_state_save=0 00:13:48.247 do_verify=1 00:13:48.247 verify=crc32c-intel 00:13:48.247 [job0] 00:13:48.247 filename=/dev/nvme0n1 00:13:48.247 Could not set queue depth (nvme0n1) 00:13:48.503 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:48.503 fio-3.35 00:13:48.503 Starting 1 thread 00:13:49.878 00:13:49.878 job0: (groupid=0, jobs=1): err= 0: pid=69001: Tue Dec 10 11:16:56 2024 00:13:49.878 read: IOPS=2300, BW=9203KiB/s (9424kB/s)(9212KiB/1001msec) 00:13:49.878 slat (nsec): min=13616, max=62527, avg=16680.42, stdev=3697.27 00:13:49.878 clat (usec): min=185, max=3013, avg=225.50, stdev=95.59 00:13:49.878 lat (usec): min=202, max=3041, avg=242.18, stdev=96.24 00:13:49.878 clat percentiles (usec): 00:13:49.878 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 210], 00:13:49.878 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 221], 00:13:49.878 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 247], 00:13:49.878 | 99.00th=[ 314], 99.50th=[ 478], 99.90th=[ 1778], 99.95th=[ 2704], 00:13:49.878 | 99.99th=[ 2999] 00:13:49.878 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:49.878 slat (usec): min=19, max=137, avg=25.22, stdev= 6.93 00:13:49.878 clat (usec): min=115, max=1179, avg=143.53, stdev=27.25 00:13:49.878 lat (usec): min=141, max=1212, avg=168.75, stdev=29.64 00:13:49.878 clat percentiles (usec): 00:13:49.878 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 131], 00:13:49.878 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:13:49.878 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 174], 00:13:49.878 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 330], 99.95th=[ 510], 00:13:49.878 | 99.99th=[ 1188] 00:13:49.878 bw ( KiB/s): min=11680, max=11680, per=100.00%, avg=11680.00, stdev= 0.00, samples=1 00:13:49.878 iops : min= 2920, max= 2920, avg=2920.00, stdev= 0.00, samples=1 00:13:49.878 lat (usec) : 250=97.92%, 500=1.83%, 750=0.08%, 1000=0.06% 00:13:49.878 lat (msec) : 2=0.06%, 4=0.04% 00:13:49.878 cpu : usr=1.60%, sys=8.60%, ctx=4863, majf=0, minf=5 00:13:49.878 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.878 issued rwts: total=2303,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.878 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:49.878 00:13:49.878 Run status group 0 (all jobs): 00:13:49.878 READ: bw=9203KiB/s (9424kB/s), 9203KiB/s-9203KiB/s (9424kB/s-9424kB/s), io=9212KiB (9433kB), run=1001-1001msec 00:13:49.878 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:13:49.878 00:13:49.878 Disk stats 
(read/write): 00:13:49.878 nvme0n1: ios=2098/2330, merge=0/0, ticks=483/356, in_queue=839, util=91.28% 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.878 rmmod nvme_tcp 00:13:49.878 rmmod nvme_fabrics 00:13:49.878 rmmod nvme_keyring 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 68905 ']' 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 68905 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 68905 ']' 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 68905 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68905 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.878 killing process with pid 68905 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68905' 00:13:49.878 11:16:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 68905 00:13:49.878 11:16:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 68905 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:13:51.256 00:13:51.256 real 0m7.182s 00:13:51.256 user 0m21.559s 00:13:51.256 sys 0m2.383s 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.256 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.256 ************************************ 00:13:51.256 END TEST nvmf_nmic 00:13:51.256 ************************************ 00:13:51.256 11:16:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:51.256 11:16:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:51.256 11:16:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.256 11:16:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:51.256 ************************************ 00:13:51.256 START TEST nvmf_fio_target 00:13:51.256 ************************************ 00:13:51.256 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:51.515 * Looking for test storage... 00:13:51.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:51.515 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:51.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.516 --rc genhtml_branch_coverage=1 00:13:51.516 --rc genhtml_function_coverage=1 00:13:51.516 --rc genhtml_legend=1 00:13:51.516 --rc geninfo_all_blocks=1 00:13:51.516 --rc geninfo_unexecuted_blocks=1 00:13:51.516 00:13:51.516 ' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:51.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.516 --rc genhtml_branch_coverage=1 00:13:51.516 --rc genhtml_function_coverage=1 00:13:51.516 --rc genhtml_legend=1 00:13:51.516 --rc geninfo_all_blocks=1 00:13:51.516 --rc geninfo_unexecuted_blocks=1 00:13:51.516 00:13:51.516 ' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:51.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.516 --rc genhtml_branch_coverage=1 00:13:51.516 --rc genhtml_function_coverage=1 00:13:51.516 --rc genhtml_legend=1 00:13:51.516 --rc geninfo_all_blocks=1 00:13:51.516 --rc geninfo_unexecuted_blocks=1 00:13:51.516 00:13:51.516 ' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:51.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.516 --rc genhtml_branch_coverage=1 00:13:51.516 --rc genhtml_function_coverage=1 00:13:51.516 --rc genhtml_legend=1 00:13:51.516 --rc geninfo_all_blocks=1 00:13:51.516 --rc geninfo_unexecuted_blocks=1 00:13:51.516 00:13:51.516 ' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:51.516 
11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.516 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:51.516 11:16:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:51.516 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:51.517 Cannot find device "nvmf_init_br" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:51.517 Cannot find device "nvmf_init_br2" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:51.517 Cannot find device "nvmf_tgt_br" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:51.517 Cannot find device "nvmf_tgt_br2" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:51.517 Cannot find device "nvmf_init_br" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:51.517 Cannot find device "nvmf_init_br2" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:51.517 Cannot find device "nvmf_tgt_br" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:51.517 Cannot find device "nvmf_tgt_br2" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:51.517 Cannot find device "nvmf_br" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:51.517 Cannot find device "nvmf_init_if" 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:13:51.517 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:51.775 Cannot find device "nvmf_init_if2" 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:51.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:13:51.775 
11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:51.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:51.775 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:51.776 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:52.034 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:52.034 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:13:52.034 00:13:52.034 --- 10.0.0.3 ping statistics --- 00:13:52.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.034 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:52.034 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:52.034 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:13:52.034 00:13:52.034 --- 10.0.0.4 ping statistics --- 00:13:52.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.034 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:52.034 00:13:52.034 --- 10.0.0.1 ping statistics --- 00:13:52.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.034 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:52.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:52.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:52.034 00:13:52.034 --- 10.0.0.2 ping statistics --- 00:13:52.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.034 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.034 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69241 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69241 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69241 ']' 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.035 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.035 [2024-12-10 11:16:58.790265] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
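The nvmf_veth_init plumbing traced above builds a bridged veth topology: initiator interfaces on the host side (10.0.0.1, 10.0.0.2) and target interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), joined by the nvmf_br bridge, with iptables rules opening port 4420. Reduced to a single initiator/target pair, a sketch of that setup looks like this (the real helper also brings up a second pair, loopback, and comment-tagged iptables rules):

# Initiator veth pair and target veth pair; the target end is moved into the
# nvmf_tgt_ns_spdk namespace so 10.0.0.3:4420 is reachable from 10.0.0.1.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# Join the host-side peers with a bridge and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, as in the trace: the target address answers from the host side.
ping -c 1 10.0.0.3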
00:13:52.035 [2024-12-10 11:16:58.790450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.295 [2024-12-10 11:16:58.979309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.295 [2024-12-10 11:16:59.111062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.295 [2024-12-10 11:16:59.111141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.295 [2024-12-10 11:16:59.111167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.295 [2024-12-10 11:16:59.111182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.295 [2024-12-10 11:16:59.111198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.295 [2024-12-10 11:16:59.113387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.295 [2024-12-10 11:16:59.113438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.295 [2024-12-10 11:16:59.113548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.295 [2024-12-10 11:16:59.113559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.567 [2024-12-10 11:16:59.327623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:53.132 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.132 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:13:53.132 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.132 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.132 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.132 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.132 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:53.391 [2024-12-10 11:17:00.141840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.391 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.958 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:53.958 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:54.215 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:54.215 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:54.783 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:54.783 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:55.041 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:55.041 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:55.300 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:55.558 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:55.558 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.126 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:56.126 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.384 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:56.384 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:56.642 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:57.209 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:57.209 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:57.468 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:57.468 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:57.726 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:57.984 [2024-12-10 11:17:04.618650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:57.984 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:58.244 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:58.502 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:58.760 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:58.760 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:13:58.760 11:17:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:58.760 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:13:58.760 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:13:58.760 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:14:00.661 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:00.661 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:00.661 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:00.919 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:14:00.919 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:00.919 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:14:00.919 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:00.919 [global] 00:14:00.919 thread=1 00:14:00.919 invalidate=1 00:14:00.919 rw=write 00:14:00.919 time_based=1 00:14:00.919 runtime=1 00:14:00.919 ioengine=libaio 00:14:00.919 direct=1 00:14:00.919 bs=4096 00:14:00.919 iodepth=1 00:14:00.919 norandommap=0 00:14:00.919 numjobs=1 00:14:00.919 00:14:00.919 verify_dump=1 00:14:00.919 verify_backlog=512 00:14:00.919 verify_state_save=0 00:14:00.919 do_verify=1 00:14:00.919 verify=crc32c-intel 00:14:00.919 [job0] 00:14:00.919 filename=/dev/nvme0n1 00:14:00.919 [job1] 00:14:00.919 filename=/dev/nvme0n2 00:14:00.919 [job2] 00:14:00.919 filename=/dev/nvme0n3 00:14:00.919 [job3] 00:14:00.919 filename=/dev/nvme0n4 00:14:00.919 Could not set queue depth (nvme0n1) 00:14:00.919 Could not set queue depth (nvme0n2) 00:14:00.919 Could not set queue depth (nvme0n3) 00:14:00.919 Could not set queue depth (nvme0n4) 00:14:00.919 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.919 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.919 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.919 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.919 fio-3.35 00:14:00.919 Starting 4 threads 00:14:02.293 00:14:02.293 job0: (groupid=0, jobs=1): err= 0: pid=69441: Tue Dec 10 11:17:08 2024 00:14:02.293 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:14:02.293 slat (nsec): min=8648, max=39688, avg=14601.06, stdev=3263.94 00:14:02.293 clat (usec): min=169, max=2510, avg=206.20, stdev=63.49 00:14:02.293 lat (usec): min=184, max=2526, avg=220.80, stdev=62.92 00:14:02.293 clat percentiles (usec): 00:14:02.293 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184], 00:14:02.293 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:14:02.293 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 285], 95.00th=[ 306], 00:14:02.293 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 734], 99.95th=[ 1188], 00:14:02.293 | 99.99th=[ 2507] 
00:14:02.293 write: IOPS=2579, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 00:14:02.293 slat (nsec): min=14362, max=91645, avg=22570.72, stdev=5651.93 00:14:02.293 clat (usec): min=114, max=315, avg=142.04, stdev=15.76 00:14:02.293 lat (usec): min=133, max=406, avg=164.61, stdev=18.02 00:14:02.293 clat percentiles (usec): 00:14:02.293 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 129], 00:14:02.293 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 145], 00:14:02.293 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:14:02.293 | 99.00th=[ 186], 99.50th=[ 206], 99.90th=[ 302], 99.95th=[ 314], 00:14:02.293 | 99.99th=[ 314] 00:14:02.293 bw ( KiB/s): min=12288, max=12288, per=37.44%, avg=12288.00, stdev= 0.00, samples=1 00:14:02.293 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:02.293 lat (usec) : 250=94.57%, 500=5.37%, 750=0.02% 00:14:02.293 lat (msec) : 2=0.02%, 4=0.02% 00:14:02.293 cpu : usr=1.90%, sys=7.80%, ctx=5144, majf=0, minf=9 00:14:02.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.293 issued rwts: total=2560,2582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:02.293 job1: (groupid=0, jobs=1): err= 0: pid=69442: Tue Dec 10 11:17:08 2024 00:14:02.293 read: IOPS=1501, BW=6006KiB/s (6150kB/s)(6012KiB/1001msec) 00:14:02.293 slat (nsec): min=9362, max=54992, avg=17008.02, stdev=5581.54 00:14:02.293 clat (usec): min=172, max=1910, avg=318.76, stdev=90.83 00:14:02.293 lat (usec): min=184, max=1920, avg=335.76, stdev=93.06 00:14:02.293 clat percentiles (usec): 00:14:02.293 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 206], 00:14:02.293 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 338], 00:14:02.293 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 412], 95.00th=[ 420], 00:14:02.293 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 734], 99.95th=[ 1909], 00:14:02.293 | 99.99th=[ 1909] 00:14:02.293 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:02.293 slat (nsec): min=11356, max=78512, avg=24152.40, stdev=8276.68 00:14:02.293 clat (usec): min=153, max=919, avg=294.12, stdev=79.26 00:14:02.293 lat (usec): min=191, max=973, avg=318.27, stdev=81.87 00:14:02.293 clat percentiles (usec): 00:14:02.293 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 247], 00:14:02.293 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 277], 00:14:02.293 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 433], 95.00th=[ 474], 00:14:02.294 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 914], 99.95th=[ 922], 00:14:02.294 | 99.99th=[ 922] 00:14:02.294 bw ( KiB/s): min= 6360, max= 6360, per=19.38%, avg=6360.00, stdev= 0.00, samples=1 00:14:02.294 iops : min= 1590, max= 1590, avg=1590.00, stdev= 0.00, samples=1 00:14:02.294 lat (usec) : 250=24.71%, 500=73.54%, 750=1.61%, 1000=0.10% 00:14:02.294 lat (msec) : 2=0.03% 00:14:02.294 cpu : usr=1.70%, sys=5.00%, ctx=3039, majf=0, minf=13 00:14:02.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.294 issued rwts: total=1503,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:14:02.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:02.294 job2: (groupid=0, jobs=1): err= 0: pid=69444: Tue Dec 10 11:17:08 2024 00:14:02.294 read: IOPS=1376, BW=5506KiB/s (5639kB/s)(5512KiB/1001msec) 00:14:02.294 slat (nsec): min=10236, max=62294, avg=21833.09, stdev=8968.46 00:14:02.294 clat (usec): min=273, max=1967, avg=343.82, stdev=62.67 00:14:02.294 lat (usec): min=287, max=1981, avg=365.65, stdev=67.57 00:14:02.294 clat percentiles (usec): 00:14:02.294 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:14:02.294 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:14:02.294 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 412], 00:14:02.294 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 758], 99.95th=[ 1975], 00:14:02.294 | 99.99th=[ 1975] 00:14:02.294 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:02.294 slat (usec): min=13, max=108, avg=31.43, stdev=12.45 00:14:02.294 clat (usec): min=169, max=1030, avg=286.40, stdev=73.25 00:14:02.294 lat (usec): min=226, max=1073, avg=317.83, stdev=80.90 00:14:02.294 clat percentiles (usec): 00:14:02.294 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:14:02.294 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:14:02.294 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 416], 95.00th=[ 453], 00:14:02.294 | 99.00th=[ 519], 99.50th=[ 562], 99.90th=[ 922], 99.95th=[ 1029], 00:14:02.294 | 99.99th=[ 1029] 00:14:02.294 bw ( KiB/s): min= 6360, max= 6360, per=19.38%, avg=6360.00, stdev= 0.00, samples=1 00:14:02.294 iops : min= 1590, max= 1590, avg=1590.00, stdev= 0.00, samples=1 00:14:02.294 lat (usec) : 250=16.33%, 500=82.57%, 750=0.96%, 1000=0.07% 00:14:02.294 lat (msec) : 2=0.07% 00:14:02.294 cpu : usr=1.70%, sys=7.10%, ctx=2916, majf=0, minf=11 00:14:02.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.294 issued rwts: total=1378,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:02.294 job3: (groupid=0, jobs=1): err= 0: pid=69445: Tue Dec 10 11:17:08 2024 00:14:02.294 read: IOPS=2066, BW=8268KiB/s (8466kB/s)(8276KiB/1001msec) 00:14:02.294 slat (nsec): min=11898, max=75069, avg=20075.00, stdev=7556.13 00:14:02.294 clat (usec): min=181, max=3146, avg=220.80, stdev=66.89 00:14:02.294 lat (usec): min=195, max=3163, avg=240.88, stdev=67.55 00:14:02.294 clat percentiles (usec): 00:14:02.294 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:14:02.294 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:14:02.294 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 245], 00:14:02.294 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 420], 99.95th=[ 562], 00:14:02.294 | 99.99th=[ 3163] 00:14:02.294 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:02.294 slat (usec): min=15, max=107, avg=27.59, stdev= 8.76 00:14:02.294 clat (usec): min=132, max=793, avg=164.15, stdev=17.82 00:14:02.294 lat (usec): min=151, max=816, avg=191.74, stdev=20.18 00:14:02.294 clat percentiles (usec): 00:14:02.294 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 153], 00:14:02.294 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:14:02.294 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 
95.00th=[ 186], 00:14:02.294 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 253], 99.95th=[ 306], 00:14:02.294 | 99.99th=[ 791] 00:14:02.294 bw ( KiB/s): min=10784, max=10784, per=32.85%, avg=10784.00, stdev= 0.00, samples=1 00:14:02.294 iops : min= 2696, max= 2696, avg=2696.00, stdev= 0.00, samples=1 00:14:02.294 lat (usec) : 250=98.70%, 500=1.23%, 750=0.02%, 1000=0.02% 00:14:02.294 lat (msec) : 4=0.02% 00:14:02.294 cpu : usr=2.20%, sys=9.10%, ctx=4630, majf=0, minf=3 00:14:02.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.294 issued rwts: total=2069,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:02.294 00:14:02.294 Run status group 0 (all jobs): 00:14:02.294 READ: bw=29.3MiB/s (30.7MB/s), 5506KiB/s-9.99MiB/s (5639kB/s-10.5MB/s), io=29.3MiB (30.8MB), run=1001-1001msec 00:14:02.294 WRITE: bw=32.1MiB/s (33.6MB/s), 6138KiB/s-10.1MiB/s (6285kB/s-10.6MB/s), io=32.1MiB (33.6MB), run=1001-1001msec 00:14:02.294 00:14:02.294 Disk stats (read/write): 00:14:02.294 nvme0n1: ios=2098/2535, merge=0/0, ticks=451/372, in_queue=823, util=87.98% 00:14:02.294 nvme0n2: ios=1072/1432, merge=0/0, ticks=342/383, in_queue=725, util=88.40% 00:14:02.294 nvme0n3: ios=1024/1432, merge=0/0, ticks=334/417, in_queue=751, util=89.07% 00:14:02.294 nvme0n4: ios=1885/2048, merge=0/0, ticks=435/354, in_queue=789, util=89.73% 00:14:02.294 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:02.294 [global] 00:14:02.294 thread=1 00:14:02.294 invalidate=1 00:14:02.294 rw=randwrite 00:14:02.294 time_based=1 00:14:02.294 runtime=1 00:14:02.294 ioengine=libaio 00:14:02.294 direct=1 00:14:02.294 bs=4096 00:14:02.294 iodepth=1 00:14:02.294 norandommap=0 00:14:02.294 numjobs=1 00:14:02.294 00:14:02.294 verify_dump=1 00:14:02.294 verify_backlog=512 00:14:02.294 verify_state_save=0 00:14:02.294 do_verify=1 00:14:02.294 verify=crc32c-intel 00:14:02.294 [job0] 00:14:02.294 filename=/dev/nvme0n1 00:14:02.294 [job1] 00:14:02.294 filename=/dev/nvme0n2 00:14:02.294 [job2] 00:14:02.294 filename=/dev/nvme0n3 00:14:02.294 [job3] 00:14:02.294 filename=/dev/nvme0n4 00:14:02.294 Could not set queue depth (nvme0n1) 00:14:02.294 Could not set queue depth (nvme0n2) 00:14:02.294 Could not set queue depth (nvme0n3) 00:14:02.294 Could not set queue depth (nvme0n4) 00:14:02.294 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:02.294 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:02.294 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:02.294 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:02.294 fio-3.35 00:14:02.294 Starting 4 threads 00:14:03.673 00:14:03.673 job0: (groupid=0, jobs=1): err= 0: pid=69507: Tue Dec 10 11:17:10 2024 00:14:03.673 read: IOPS=1380, BW=5522KiB/s (5655kB/s)(5528KiB/1001msec) 00:14:03.673 slat (usec): min=10, max=182, avg=18.26, stdev= 8.79 00:14:03.673 clat (usec): min=150, max=3403, avg=349.82, stdev=148.83 00:14:03.673 lat (usec): min=185, max=3424, avg=368.08, stdev=150.62 
00:14:03.673 clat percentiles (usec): 00:14:03.673 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:14:03.673 | 30.00th=[ 206], 40.00th=[ 330], 50.00th=[ 404], 60.00th=[ 416], 00:14:03.673 | 70.00th=[ 429], 80.00th=[ 441], 90.00th=[ 465], 95.00th=[ 537], 00:14:03.673 | 99.00th=[ 586], 99.50th=[ 660], 99.90th=[ 1188], 99.95th=[ 3392], 00:14:03.673 | 99.99th=[ 3392] 00:14:03.673 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:03.673 slat (usec): min=13, max=184, avg=25.20, stdev= 8.50 00:14:03.673 clat (usec): min=116, max=712, avg=290.20, stdev=122.17 00:14:03.673 lat (usec): min=137, max=735, avg=315.40, stdev=122.71 00:14:03.673 clat percentiles (usec): 00:14:03.673 | 1.00th=[ 119], 5.00th=[ 131], 10.00th=[ 141], 20.00th=[ 149], 00:14:03.673 | 30.00th=[ 163], 40.00th=[ 262], 50.00th=[ 318], 60.00th=[ 334], 00:14:03.673 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 469], 95.00th=[ 494], 00:14:03.673 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 676], 99.95th=[ 709], 00:14:03.673 | 99.99th=[ 709] 00:14:03.673 bw ( KiB/s): min= 5360, max= 5360, per=21.11%, avg=5360.00, stdev= 0.00, samples=1 00:14:03.673 iops : min= 1340, max= 1340, avg=1340.00, stdev= 0.00, samples=1 00:14:03.673 lat (usec) : 250=35.54%, 500=58.64%, 750=5.69%, 1000=0.07% 00:14:03.673 lat (msec) : 2=0.03%, 4=0.03% 00:14:03.673 cpu : usr=1.50%, sys=5.40%, ctx=2920, majf=0, minf=9 00:14:03.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.673 issued rwts: total=1382,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.673 job1: (groupid=0, jobs=1): err= 0: pid=69508: Tue Dec 10 11:17:10 2024 00:14:03.673 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:14:03.673 slat (usec): min=14, max=101, avg=31.81, stdev=10.73 00:14:03.673 clat (usec): min=204, max=3507, avg=442.95, stdev=203.39 00:14:03.673 lat (usec): min=226, max=3537, avg=474.76, stdev=205.84 00:14:03.673 clat percentiles (usec): 00:14:03.673 | 1.00th=[ 237], 5.00th=[ 318], 10.00th=[ 359], 20.00th=[ 383], 00:14:03.673 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 416], 00:14:03.673 | 70.00th=[ 424], 80.00th=[ 441], 90.00th=[ 553], 95.00th=[ 783], 00:14:03.673 | 99.00th=[ 938], 99.50th=[ 1221], 99.90th=[ 3392], 99.95th=[ 3523], 00:14:03.673 | 99.99th=[ 3523] 00:14:03.673 write: IOPS=1446, BW=5786KiB/s (5925kB/s)(5792KiB/1001msec); 0 zone resets 00:14:03.673 slat (usec): min=7, max=1302, avg=44.12, stdev=44.08 00:14:03.673 clat (usec): min=2, max=586, avg=304.56, stdev=89.17 00:14:03.673 lat (usec): min=156, max=1304, avg=348.68, stdev=98.56 00:14:03.673 clat percentiles (usec): 00:14:03.673 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 165], 20.00th=[ 217], 00:14:03.673 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 326], 00:14:03.673 | 70.00th=[ 347], 80.00th=[ 371], 90.00th=[ 416], 95.00th=[ 445], 00:14:03.673 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 570], 99.95th=[ 586], 00:14:03.673 | 99.99th=[ 586] 00:14:03.673 bw ( KiB/s): min= 6608, max= 6608, per=26.02%, avg=6608.00, stdev= 0.00, samples=1 00:14:03.673 iops : min= 1652, max= 1652, avg=1652.00, stdev= 0.00, samples=1 00:14:03.673 lat (usec) : 4=0.04%, 250=13.43%, 500=81.03%, 750=2.87%, 1000=2.22% 00:14:03.673 lat (msec) : 2=0.28%, 4=0.12% 00:14:03.673 cpu : 
usr=1.30%, sys=7.60%, ctx=2825, majf=0, minf=7 00:14:03.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.673 issued rwts: total=1024,1448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.673 job2: (groupid=0, jobs=1): err= 0: pid=69509: Tue Dec 10 11:17:10 2024 00:14:03.673 read: IOPS=1799, BW=7197KiB/s (7370kB/s)(7204KiB/1001msec) 00:14:03.673 slat (usec): min=7, max=151, avg=19.68, stdev=13.74 00:14:03.673 clat (usec): min=190, max=1088, avg=269.15, stdev=89.35 00:14:03.673 lat (usec): min=203, max=1149, avg=288.83, stdev=95.97 00:14:03.673 clat percentiles (usec): 00:14:03.673 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 215], 00:14:03.673 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 247], 00:14:03.673 | 70.00th=[ 260], 80.00th=[ 297], 90.00th=[ 412], 95.00th=[ 465], 00:14:03.673 | 99.00th=[ 586], 99.50th=[ 627], 99.90th=[ 783], 99.95th=[ 1090], 00:14:03.673 | 99.99th=[ 1090] 00:14:03.673 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:03.673 slat (usec): min=9, max=180, avg=28.08, stdev=15.23 00:14:03.673 clat (usec): min=130, max=1475, avg=202.10, stdev=69.74 00:14:03.673 lat (usec): min=150, max=1506, avg=230.18, stdev=76.53 00:14:03.673 clat percentiles (usec): 00:14:03.673 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 161], 00:14:03.673 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 188], 00:14:03.673 | 70.00th=[ 206], 80.00th=[ 237], 90.00th=[ 273], 95.00th=[ 338], 00:14:03.673 | 99.00th=[ 453], 99.50th=[ 510], 99.90th=[ 545], 99.95th=[ 570], 00:14:03.673 | 99.99th=[ 1483] 00:14:03.673 bw ( KiB/s): min= 8192, max= 8192, per=32.26%, avg=8192.00, stdev= 0.00, samples=1 00:14:03.673 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:03.673 lat (usec) : 250=74.72%, 500=23.43%, 750=1.71%, 1000=0.08% 00:14:03.673 lat (msec) : 2=0.05% 00:14:03.673 cpu : usr=2.30%, sys=6.40%, ctx=4119, majf=0, minf=15 00:14:03.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.673 issued rwts: total=1801,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.673 job3: (groupid=0, jobs=1): err= 0: pid=69510: Tue Dec 10 11:17:10 2024 00:14:03.673 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:14:03.673 slat (usec): min=7, max=282, avg=24.97, stdev=13.21 00:14:03.673 clat (usec): min=269, max=3291, avg=428.27, stdev=117.91 00:14:03.673 lat (usec): min=291, max=3333, avg=453.24, stdev=119.96 00:14:03.673 clat percentiles (usec): 00:14:03.673 | 1.00th=[ 289], 5.00th=[ 310], 10.00th=[ 338], 20.00th=[ 392], 00:14:03.673 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 429], 00:14:03.673 | 70.00th=[ 437], 80.00th=[ 453], 90.00th=[ 515], 95.00th=[ 545], 00:14:03.673 | 99.00th=[ 660], 99.50th=[ 742], 99.90th=[ 1598], 99.95th=[ 3294], 00:14:03.673 | 99.99th=[ 3294] 00:14:03.673 write: IOPS=1321, BW=5287KiB/s (5414kB/s)(5292KiB/1001msec); 0 zone resets 00:14:03.673 slat (usec): min=7, max=215, avg=41.74, stdev=26.20 00:14:03.673 clat (usec): min=167, 
max=1106, avg=358.11, stdev=83.87 00:14:03.673 lat (usec): min=246, max=1119, avg=399.85, stdev=89.54 00:14:03.673 clat percentiles (usec): 00:14:03.673 | 1.00th=[ 204], 5.00th=[ 233], 10.00th=[ 253], 20.00th=[ 306], 00:14:03.673 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 355], 00:14:03.673 | 70.00th=[ 392], 80.00th=[ 441], 90.00th=[ 469], 95.00th=[ 490], 00:14:03.673 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 889], 99.95th=[ 1106], 00:14:03.673 | 99.99th=[ 1106] 00:14:03.673 bw ( KiB/s): min= 5360, max= 5360, per=21.11%, avg=5360.00, stdev= 0.00, samples=1 00:14:03.673 iops : min= 1340, max= 1340, avg=1340.00, stdev= 0.00, samples=1 00:14:03.673 lat (usec) : 250=5.16%, 500=87.09%, 750=7.50%, 1000=0.13% 00:14:03.673 lat (msec) : 2=0.09%, 4=0.04% 00:14:03.673 cpu : usr=1.60%, sys=6.40%, ctx=2653, majf=0, minf=14 00:14:03.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.673 issued rwts: total=1024,1323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.673 00:14:03.673 Run status group 0 (all jobs): 00:14:03.673 READ: bw=20.4MiB/s (21.4MB/s), 4092KiB/s-7197KiB/s (4190kB/s-7370kB/s), io=20.4MiB (21.4MB), run=1001-1001msec 00:14:03.673 WRITE: bw=24.8MiB/s (26.0MB/s), 5287KiB/s-8184KiB/s (5414kB/s-8380kB/s), io=24.8MiB (26.0MB), run=1001-1001msec 00:14:03.673 00:14:03.673 Disk stats (read/write): 00:14:03.673 nvme0n1: ios=1074/1157, merge=0/0, ticks=426/322, in_queue=748, util=88.78% 00:14:03.673 nvme0n2: ios=1073/1135, merge=0/0, ticks=475/343, in_queue=818, util=87.56% 00:14:03.673 nvme0n3: ios=1545/2024, merge=0/0, ticks=382/410, in_queue=792, util=89.27% 00:14:03.673 nvme0n4: ios=1013/1024, merge=0/0, ticks=412/364, in_queue=776, util=89.30% 00:14:03.673 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:03.674 [global] 00:14:03.674 thread=1 00:14:03.674 invalidate=1 00:14:03.674 rw=write 00:14:03.674 time_based=1 00:14:03.674 runtime=1 00:14:03.674 ioengine=libaio 00:14:03.674 direct=1 00:14:03.674 bs=4096 00:14:03.674 iodepth=128 00:14:03.674 norandommap=0 00:14:03.674 numjobs=1 00:14:03.674 00:14:03.674 verify_dump=1 00:14:03.674 verify_backlog=512 00:14:03.674 verify_state_save=0 00:14:03.674 do_verify=1 00:14:03.674 verify=crc32c-intel 00:14:03.674 [job0] 00:14:03.674 filename=/dev/nvme0n1 00:14:03.674 [job1] 00:14:03.674 filename=/dev/nvme0n2 00:14:03.674 [job2] 00:14:03.674 filename=/dev/nvme0n3 00:14:03.674 [job3] 00:14:03.674 filename=/dev/nvme0n4 00:14:03.674 Could not set queue depth (nvme0n1) 00:14:03.674 Could not set queue depth (nvme0n2) 00:14:03.674 Could not set queue depth (nvme0n3) 00:14:03.674 Could not set queue depth (nvme0n4) 00:14:03.674 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:03.674 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:03.674 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:03.674 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:03.674 fio-3.35 00:14:03.674 Starting 4 threads 00:14:05.048 
00:14:05.048 job0: (groupid=0, jobs=1): err= 0: pid=69563: Tue Dec 10 11:17:11 2024 00:14:05.048 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:14:05.048 slat (usec): min=6, max=4170, avg=101.51, stdev=492.76 00:14:05.048 clat (usec): min=8640, max=18576, avg=13730.65, stdev=1972.22 00:14:05.048 lat (usec): min=8651, max=18592, avg=13832.16, stdev=1922.82 00:14:05.048 clat percentiles (usec): 00:14:05.048 | 1.00th=[ 9896], 5.00th=[12125], 10.00th=[12518], 20.00th=[12649], 00:14:05.048 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:14:05.048 | 70.00th=[13435], 80.00th=[15401], 90.00th=[17433], 95.00th=[17957], 00:14:05.048 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:14:05.048 | 99.99th=[18482] 00:14:05.048 write: IOPS=4659, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1003msec); 0 zone resets 00:14:05.048 slat (usec): min=8, max=3765, avg=105.38, stdev=462.58 00:14:05.048 clat (usec): min=273, max=18008, avg=13510.96, stdev=2301.12 00:14:05.048 lat (usec): min=2466, max=18062, avg=13616.34, stdev=2269.40 00:14:05.048 clat percentiles (usec): 00:14:05.048 | 1.00th=[ 5669], 5.00th=[11600], 10.00th=[11863], 20.00th=[12125], 00:14:05.048 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:14:05.048 | 70.00th=[14877], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:14:05.048 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:14:05.048 | 99.99th=[17957] 00:14:05.048 bw ( KiB/s): min=16384, max=20521, per=33.35%, avg=18452.50, stdev=2925.30, samples=2 00:14:05.048 iops : min= 4096, max= 5130, avg=4613.00, stdev=731.15, samples=2 00:14:05.048 lat (usec) : 500=0.01% 00:14:05.048 lat (msec) : 4=0.34%, 10=1.44%, 20=98.20% 00:14:05.048 cpu : usr=3.89%, sys=14.07%, ctx=291, majf=0, minf=11 00:14:05.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:05.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:05.048 issued rwts: total=4608,4673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:05.048 job1: (groupid=0, jobs=1): err= 0: pid=69564: Tue Dec 10 11:17:11 2024 00:14:05.048 read: IOPS=2262, BW=9049KiB/s (9266kB/s)(9076KiB/1003msec) 00:14:05.048 slat (usec): min=6, max=7862, avg=209.31, stdev=1025.57 00:14:05.048 clat (usec): min=1071, max=30781, avg=26207.30, stdev=3512.97 00:14:05.048 lat (usec): min=5264, max=30797, avg=26416.61, stdev=3382.28 00:14:05.048 clat percentiles (usec): 00:14:05.048 | 1.00th=[ 5669], 5.00th=[20841], 10.00th=[23200], 20.00th=[26084], 00:14:05.048 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:14:05.048 | 70.00th=[27395], 80.00th=[27919], 90.00th=[28443], 95.00th=[28967], 00:14:05.048 | 99.00th=[30016], 99.50th=[30016], 99.90th=[30802], 99.95th=[30802], 00:14:05.048 | 99.99th=[30802] 00:14:05.048 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:14:05.048 slat (usec): min=11, max=8106, avg=197.56, stdev=904.64 00:14:05.048 clat (usec): min=16459, max=31514, avg=25920.35, stdev=1889.78 00:14:05.048 lat (usec): min=21116, max=31539, avg=26117.91, stdev=1682.99 00:14:05.048 clat percentiles (usec): 00:14:05.048 | 1.00th=[20055], 5.00th=[21890], 10.00th=[24249], 20.00th=[25035], 00:14:05.048 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:14:05.048 | 70.00th=[26346], 80.00th=[27657], 
90.00th=[28181], 95.00th=[28705], 00:14:05.048 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31589], 99.95th=[31589], 00:14:05.048 | 99.99th=[31589] 00:14:05.048 bw ( KiB/s): min=10012, max=10488, per=18.52%, avg=10250.00, stdev=336.58, samples=2 00:14:05.048 iops : min= 2503, max= 2622, avg=2562.50, stdev=84.15, samples=2 00:14:05.048 lat (msec) : 2=0.02%, 10=0.66%, 20=1.74%, 50=97.58% 00:14:05.048 cpu : usr=1.90%, sys=7.58%, ctx=330, majf=0, minf=13 00:14:05.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:05.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:05.048 issued rwts: total=2269,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:05.048 job2: (groupid=0, jobs=1): err= 0: pid=69565: Tue Dec 10 11:17:11 2024 00:14:05.048 read: IOPS=3705, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1002msec) 00:14:05.048 slat (usec): min=5, max=4818, avg=124.97, stdev=614.00 00:14:05.048 clat (usec): min=437, max=21662, avg=16356.63, stdev=2522.13 00:14:05.048 lat (usec): min=3612, max=21680, avg=16481.60, stdev=2456.52 00:14:05.048 clat percentiles (usec): 00:14:05.048 | 1.00th=[ 7373], 5.00th=[13829], 10.00th=[14353], 20.00th=[14746], 00:14:05.048 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[17695], 00:14:05.048 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19530], 95.00th=[19792], 00:14:05.048 | 99.00th=[21103], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:14:05.048 | 99.99th=[21627] 00:14:05.048 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:14:05.048 slat (usec): min=8, max=5586, avg=122.68, stdev=560.79 00:14:05.048 clat (usec): min=10565, max=20161, avg=16062.23, stdev=1720.87 00:14:05.048 lat (usec): min=11210, max=21075, avg=16184.91, stdev=1642.47 00:14:05.048 clat percentiles (usec): 00:14:05.048 | 1.00th=[12256], 5.00th=[13566], 10.00th=[13829], 20.00th=[14484], 00:14:05.048 | 30.00th=[15139], 40.00th=[15401], 50.00th=[16057], 60.00th=[16581], 00:14:05.048 | 70.00th=[17171], 80.00th=[17433], 90.00th=[18482], 95.00th=[19006], 00:14:05.048 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:14:05.048 | 99.99th=[20055] 00:14:05.048 bw ( KiB/s): min=16376, max=16424, per=29.64%, avg=16400.00, stdev=33.94, samples=2 00:14:05.048 iops : min= 4094, max= 4106, avg=4100.00, stdev= 8.49, samples=2 00:14:05.048 lat (usec) : 500=0.01% 00:14:05.048 lat (msec) : 4=0.27%, 10=0.55%, 20=96.88%, 50=2.29% 00:14:05.048 cpu : usr=3.40%, sys=12.09%, ctx=245, majf=0, minf=8 00:14:05.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:05.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:05.049 issued rwts: total=3713,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:05.049 job3: (groupid=0, jobs=1): err= 0: pid=69566: Tue Dec 10 11:17:11 2024 00:14:05.049 read: IOPS=2292, BW=9171KiB/s (9391kB/s)(9208KiB/1004msec) 00:14:05.049 slat (usec): min=4, max=9103, avg=211.95, stdev=1066.86 00:14:05.049 clat (usec): min=1476, max=34092, avg=25966.69, stdev=3605.66 00:14:05.049 lat (usec): min=5222, max=34167, avg=26178.64, stdev=3476.63 00:14:05.049 clat percentiles (usec): 00:14:05.049 | 1.00th=[ 7308], 5.00th=[20317], 
10.00th=[23200], 20.00th=[24773], 00:14:05.049 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:14:05.049 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28443], 95.00th=[29754], 00:14:05.049 | 99.00th=[32637], 99.50th=[33162], 99.90th=[33817], 99.95th=[34341], 00:14:05.049 | 99.99th=[34341] 00:14:05.049 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:14:05.049 slat (usec): min=12, max=6715, avg=194.21, stdev=904.20 00:14:05.049 clat (usec): min=16165, max=33858, avg=26070.90, stdev=2195.25 00:14:05.049 lat (usec): min=16278, max=33876, avg=26265.10, stdev=2007.38 00:14:05.049 clat percentiles (usec): 00:14:05.049 | 1.00th=[19792], 5.00th=[22152], 10.00th=[24249], 20.00th=[25297], 00:14:05.049 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:14:05.049 | 70.00th=[26346], 80.00th=[27919], 90.00th=[28443], 95.00th=[29230], 00:14:05.049 | 99.00th=[33162], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:14:05.049 | 99.99th=[33817] 00:14:05.049 bw ( KiB/s): min= 9976, max=10504, per=18.51%, avg=10240.00, stdev=373.35, samples=2 00:14:05.049 iops : min= 2494, max= 2626, avg=2560.00, stdev=93.34, samples=2 00:14:05.049 lat (msec) : 2=0.02%, 10=0.66%, 20=2.45%, 50=96.87% 00:14:05.049 cpu : usr=2.39%, sys=5.48%, ctx=324, majf=0, minf=8 00:14:05.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:05.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:05.049 issued rwts: total=2302,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:05.049 00:14:05.049 Run status group 0 (all jobs): 00:14:05.049 READ: bw=50.2MiB/s (52.6MB/s), 9049KiB/s-17.9MiB/s (9266kB/s-18.8MB/s), io=50.4MiB (52.8MB), run=1002-1004msec 00:14:05.049 WRITE: bw=54.0MiB/s (56.7MB/s), 9.96MiB/s-18.2MiB/s (10.4MB/s-19.1MB/s), io=54.3MiB (56.9MB), run=1002-1004msec 00:14:05.049 00:14:05.049 Disk stats (read/write): 00:14:05.049 nvme0n1: ios=3633/4096, merge=0/0, ticks=11326/12507, in_queue=23833, util=86.36% 00:14:05.049 nvme0n2: ios=2028/2048, merge=0/0, ticks=12990/11756, in_queue=24746, util=86.14% 00:14:05.049 nvme0n3: ios=3072/3552, merge=0/0, ticks=11437/12508, in_queue=23945, util=88.78% 00:14:05.049 nvme0n4: ios=2048/2077, merge=0/0, ticks=13237/11786, in_queue=25023, util=89.64% 00:14:05.049 11:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:05.049 [global] 00:14:05.049 thread=1 00:14:05.049 invalidate=1 00:14:05.049 rw=randwrite 00:14:05.049 time_based=1 00:14:05.049 runtime=1 00:14:05.049 ioengine=libaio 00:14:05.049 direct=1 00:14:05.049 bs=4096 00:14:05.049 iodepth=128 00:14:05.049 norandommap=0 00:14:05.049 numjobs=1 00:14:05.049 00:14:05.049 verify_dump=1 00:14:05.049 verify_backlog=512 00:14:05.049 verify_state_save=0 00:14:05.049 do_verify=1 00:14:05.049 verify=crc32c-intel 00:14:05.049 [job0] 00:14:05.049 filename=/dev/nvme0n1 00:14:05.049 [job1] 00:14:05.049 filename=/dev/nvme0n2 00:14:05.049 [job2] 00:14:05.049 filename=/dev/nvme0n3 00:14:05.049 [job3] 00:14:05.049 filename=/dev/nvme0n4 00:14:05.049 Could not set queue depth (nvme0n1) 00:14:05.049 Could not set queue depth (nvme0n2) 00:14:05.049 Could not set queue depth (nvme0n3) 00:14:05.049 Could not set queue depth (nvme0n4) 00:14:05.049 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:05.049 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:05.049 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:05.049 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:05.049 fio-3.35 00:14:05.049 Starting 4 threads 00:14:06.425 00:14:06.425 job0: (groupid=0, jobs=1): err= 0: pid=69619: Tue Dec 10 11:17:13 2024 00:14:06.425 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:14:06.425 slat (usec): min=6, max=7123, avg=101.51, stdev=645.78 00:14:06.425 clat (usec): min=8368, max=23169, avg=14237.19, stdev=1508.84 00:14:06.425 lat (usec): min=8379, max=27652, avg=14338.70, stdev=1539.52 00:14:06.425 clat percentiles (usec): 00:14:06.425 | 1.00th=[ 9110], 5.00th=[12780], 10.00th=[13435], 20.00th=[13829], 00:14:06.425 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:14:06.426 | 70.00th=[14615], 80.00th=[14746], 90.00th=[15008], 95.00th=[15270], 00:14:06.426 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22938], 99.95th=[23200], 00:14:06.426 | 99.99th=[23200] 00:14:06.426 write: IOPS=4779, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1004msec); 0 zone resets 00:14:06.426 slat (usec): min=4, max=10128, avg=103.14, stdev=625.21 00:14:06.426 clat (usec): min=719, max=18110, avg=12840.32, stdev=1416.96 00:14:06.426 lat (usec): min=6327, max=18130, avg=12943.46, stdev=1303.86 00:14:06.426 clat percentiles (usec): 00:14:06.426 | 1.00th=[ 7111], 5.00th=[11076], 10.00th=[11600], 20.00th=[12125], 00:14:06.426 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:14:06.426 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13829], 95.00th=[14222], 00:14:06.426 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:14:06.426 | 99.99th=[18220] 00:14:06.426 bw ( KiB/s): min=16952, max=20480, per=26.75%, avg=18716.00, stdev=2494.67, samples=2 00:14:06.426 iops : min= 4238, max= 5120, avg=4679.00, stdev=623.67, samples=2 00:14:06.426 lat (usec) : 750=0.01% 00:14:06.426 lat (msec) : 10=3.66%, 20=95.56%, 50=0.78% 00:14:06.426 cpu : usr=3.99%, sys=12.86%, ctx=203, majf=0, minf=16 00:14:06.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:06.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:06.426 issued rwts: total=4608,4799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:06.426 job1: (groupid=0, jobs=1): err= 0: pid=69620: Tue Dec 10 11:17:13 2024 00:14:06.426 read: IOPS=4501, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1003msec) 00:14:06.426 slat (usec): min=9, max=4175, avg=109.01, stdev=441.65 00:14:06.426 clat (usec): min=704, max=18003, avg=13818.13, stdev=1580.76 00:14:06.426 lat (usec): min=2669, max=18090, avg=13927.14, stdev=1615.90 00:14:06.426 clat percentiles (usec): 00:14:06.426 | 1.00th=[ 7177], 5.00th=[11600], 10.00th=[12387], 20.00th=[13435], 00:14:06.426 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13829], 60.00th=[13960], 00:14:06.426 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15664], 95.00th=[16057], 00:14:06.426 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:14:06.426 | 99.99th=[17957] 00:14:06.426 write: IOPS=4594, 
BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:14:06.426 slat (usec): min=12, max=3817, avg=103.38, stdev=421.29 00:14:06.426 clat (usec): min=10391, max=17998, avg=13949.74, stdev=1134.08 00:14:06.426 lat (usec): min=10410, max=18020, avg=14053.12, stdev=1187.22 00:14:06.426 clat percentiles (usec): 00:14:06.426 | 1.00th=[11207], 5.00th=[12649], 10.00th=[12780], 20.00th=[13173], 00:14:06.426 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[14091], 00:14:06.426 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15270], 95.00th=[16450], 00:14:06.426 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:14:06.426 | 99.99th=[17957] 00:14:06.426 bw ( KiB/s): min=17648, max=19216, per=26.34%, avg=18432.00, stdev=1108.74, samples=2 00:14:06.426 iops : min= 4412, max= 4804, avg=4608.00, stdev=277.19, samples=2 00:14:06.426 lat (usec) : 750=0.01% 00:14:06.426 lat (msec) : 4=0.22%, 10=0.70%, 20=99.07% 00:14:06.426 cpu : usr=3.29%, sys=11.78%, ctx=527, majf=0, minf=5 00:14:06.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:06.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:06.426 issued rwts: total=4515,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:06.426 job2: (groupid=0, jobs=1): err= 0: pid=69621: Tue Dec 10 11:17:13 2024 00:14:06.426 read: IOPS=4018, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1006msec) 00:14:06.426 slat (usec): min=6, max=8112, avg=118.96, stdev=767.90 00:14:06.426 clat (usec): min=1959, max=25921, avg=16380.03, stdev=2013.10 00:14:06.426 lat (usec): min=8438, max=30804, avg=16498.99, stdev=2035.78 00:14:06.426 clat percentiles (usec): 00:14:06.426 | 1.00th=[ 9110], 5.00th=[11207], 10.00th=[15401], 20.00th=[15926], 00:14:06.426 | 30.00th=[16188], 40.00th=[16319], 50.00th=[16450], 60.00th=[16712], 00:14:06.426 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:14:06.426 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25822], 99.95th=[25822], 00:14:06.426 | 99.99th=[25822] 00:14:06.426 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:14:06.426 slat (usec): min=7, max=11671, avg=118.08, stdev=730.36 00:14:06.426 clat (usec): min=8037, max=21353, avg=14927.88, stdev=1415.79 00:14:06.426 lat (usec): min=10472, max=21378, avg=15045.97, stdev=1258.07 00:14:06.426 clat percentiles (usec): 00:14:06.426 | 1.00th=[ 9503], 5.00th=[13304], 10.00th=[13566], 20.00th=[14091], 00:14:06.426 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15270], 00:14:06.426 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15795], 95.00th=[16057], 00:14:06.426 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:14:06.426 | 99.99th=[21365] 00:14:06.426 bw ( KiB/s): min=16384, max=16384, per=23.41%, avg=16384.00, stdev= 0.00, samples=2 00:14:06.426 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:14:06.426 lat (msec) : 2=0.01%, 10=1.62%, 20=96.54%, 50=1.83% 00:14:06.426 cpu : usr=3.58%, sys=12.34%, ctx=181, majf=0, minf=11 00:14:06.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:06.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:06.426 issued rwts: total=4043,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.426 
latency : target=0, window=0, percentile=100.00%, depth=128 00:14:06.426 job3: (groupid=0, jobs=1): err= 0: pid=69622: Tue Dec 10 11:17:13 2024 00:14:06.426 read: IOPS=3861, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1003msec) 00:14:06.426 slat (usec): min=6, max=5724, avg=124.36, stdev=568.92 00:14:06.426 clat (usec): min=1102, max=19516, avg=16215.83, stdev=1620.96 00:14:06.426 lat (usec): min=4043, max=19578, avg=16340.19, stdev=1525.50 00:14:06.426 clat percentiles (usec): 00:14:06.426 | 1.00th=[ 8029], 5.00th=[13566], 10.00th=[15926], 20.00th=[16057], 00:14:06.426 | 30.00th=[16188], 40.00th=[16188], 50.00th=[16319], 60.00th=[16450], 00:14:06.426 | 70.00th=[16581], 80.00th=[16712], 90.00th=[17171], 95.00th=[18220], 00:14:06.426 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:14:06.426 | 99.99th=[19530] 00:14:06.426 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:14:06.426 slat (usec): min=9, max=4145, avg=118.32, stdev=531.56 00:14:06.426 clat (usec): min=11824, max=17230, avg=15558.17, stdev=658.01 00:14:06.426 lat (usec): min=12508, max=18750, avg=15676.49, stdev=465.68 00:14:06.426 clat percentiles (usec): 00:14:06.426 | 1.00th=[12387], 5.00th=[15008], 10.00th=[15139], 20.00th=[15270], 00:14:06.426 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15533], 60.00th=[15664], 00:14:06.426 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16188], 95.00th=[16319], 00:14:06.426 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171], 00:14:06.426 | 99.99th=[17171] 00:14:06.426 bw ( KiB/s): min=16384, max=16384, per=23.41%, avg=16384.00, stdev= 0.00, samples=2 00:14:06.426 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:14:06.426 lat (msec) : 2=0.01%, 10=0.80%, 20=99.18% 00:14:06.426 cpu : usr=3.99%, sys=11.28%, ctx=326, majf=0, minf=15 00:14:06.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:06.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:06.426 issued rwts: total=3873,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:06.426 00:14:06.426 Run status group 0 (all jobs): 00:14:06.426 READ: bw=66.2MiB/s (69.4MB/s), 15.1MiB/s-17.9MiB/s (15.8MB/s-18.8MB/s), io=66.6MiB (69.8MB), run=1003-1006msec 00:14:06.426 WRITE: bw=68.3MiB/s (71.7MB/s), 15.9MiB/s-18.7MiB/s (16.7MB/s-19.6MB/s), io=68.7MiB (72.1MB), run=1003-1006msec 00:14:06.426 00:14:06.426 Disk stats (read/write): 00:14:06.426 nvme0n1: ios=3956/4096, merge=0/0, ticks=52461/49209, in_queue=101670, util=88.18% 00:14:06.426 nvme0n2: ios=3738/4096, merge=0/0, ticks=16721/16985, in_queue=33706, util=88.35% 00:14:06.426 nvme0n3: ios=3272/3584, merge=0/0, ticks=51108/49815, in_queue=100923, util=89.00% 00:14:06.426 nvme0n4: ios=3201/3584, merge=0/0, ticks=12292/12123, in_queue=24415, util=89.66% 00:14:06.426 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:06.426 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69635 00:14:06.426 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:06.426 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:06.426 [global] 00:14:06.426 thread=1 00:14:06.426 invalidate=1 00:14:06.426 rw=read 00:14:06.426 
time_based=1 00:14:06.426 runtime=10 00:14:06.426 ioengine=libaio 00:14:06.426 direct=1 00:14:06.426 bs=4096 00:14:06.426 iodepth=1 00:14:06.426 norandommap=1 00:14:06.426 numjobs=1 00:14:06.426 00:14:06.426 [job0] 00:14:06.426 filename=/dev/nvme0n1 00:14:06.426 [job1] 00:14:06.426 filename=/dev/nvme0n2 00:14:06.426 [job2] 00:14:06.426 filename=/dev/nvme0n3 00:14:06.426 [job3] 00:14:06.426 filename=/dev/nvme0n4 00:14:06.426 Could not set queue depth (nvme0n1) 00:14:06.426 Could not set queue depth (nvme0n2) 00:14:06.426 Could not set queue depth (nvme0n3) 00:14:06.426 Could not set queue depth (nvme0n4) 00:14:06.426 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:06.426 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:06.426 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:06.426 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:06.426 fio-3.35 00:14:06.426 Starting 4 threads 00:14:09.782 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:09.782 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43753472, buflen=4096 00:14:09.782 fio: pid=69684, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:09.782 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:10.040 fio: pid=69683, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:10.040 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=57110528, buflen=4096 00:14:10.040 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:10.040 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:10.298 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=59768832, buflen=4096 00:14:10.298 fio: pid=69681, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:10.556 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:10.556 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:10.814 fio: pid=69682, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:10.814 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11350016, buflen=4096 00:14:11.073 00:14:11.073 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69681: Tue Dec 10 11:17:17 2024 00:14:11.073 read: IOPS=3962, BW=15.5MiB/s (16.2MB/s)(57.0MiB/3683msec) 00:14:11.073 slat (usec): min=11, max=12866, avg=20.62, stdev=156.55 00:14:11.073 clat (usec): min=160, max=4510, avg=229.67, stdev=84.82 00:14:11.073 lat (usec): min=173, max=13389, avg=250.29, stdev=181.69 00:14:11.073 clat percentiles (usec): 00:14:11.073 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:14:11.073 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 
60.00th=[ 212], 00:14:11.073 | 70.00th=[ 233], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 392], 00:14:11.073 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 840], 99.95th=[ 1352], 00:14:11.073 | 99.99th=[ 3064] 00:14:11.073 bw ( KiB/s): min=10504, max=18984, per=29.04%, avg=15955.43, stdev=2894.19, samples=7 00:14:11.073 iops : min= 2626, max= 4746, avg=3988.71, stdev=723.49, samples=7 00:14:11.073 lat (usec) : 250=74.93%, 500=24.68%, 750=0.26%, 1000=0.06% 00:14:11.073 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:14:11.073 cpu : usr=1.52%, sys=6.65%, ctx=14599, majf=0, minf=1 00:14:11.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:11.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.073 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.073 issued rwts: total=14593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:11.073 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69682: Tue Dec 10 11:17:17 2024 00:14:11.073 read: IOPS=4507, BW=17.6MiB/s (18.5MB/s)(74.8MiB/4250msec) 00:14:11.073 slat (usec): min=11, max=10883, avg=17.32, stdev=148.70 00:14:11.073 clat (usec): min=158, max=2209, avg=202.86, stdev=46.11 00:14:11.073 lat (usec): min=170, max=11122, avg=220.18, stdev=156.46 00:14:11.073 clat percentiles (usec): 00:14:11.073 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:14:11.073 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:14:11.073 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 255], 95.00th=[ 277], 00:14:11.073 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 578], 99.95th=[ 824], 00:14:11.073 | 99.99th=[ 1745] 00:14:11.073 bw ( KiB/s): min=15808, max=19663, per=32.67%, avg=17946.38, stdev=1376.79, samples=8 00:14:11.073 iops : min= 3952, max= 4915, avg=4486.50, stdev=344.06, samples=8 00:14:11.073 lat (usec) : 250=89.02%, 500=10.84%, 750=0.08%, 1000=0.01% 00:14:11.073 lat (msec) : 2=0.04%, 4=0.01% 00:14:11.073 cpu : usr=1.65%, sys=5.91%, ctx=19166, majf=0, minf=2 00:14:11.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:11.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.073 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.074 issued rwts: total=19156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:11.074 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69683: Tue Dec 10 11:17:17 2024 00:14:11.074 read: IOPS=4101, BW=16.0MiB/s (16.8MB/s)(54.5MiB/3400msec) 00:14:11.074 slat (usec): min=11, max=14557, avg=17.47, stdev=139.51 00:14:11.074 clat (usec): min=181, max=3250, avg=224.42, stdev=62.76 00:14:11.074 lat (usec): min=193, max=14820, avg=241.89, stdev=153.81 00:14:11.074 clat percentiles (usec): 00:14:11.074 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:14:11.074 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:14:11.074 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 269], 95.00th=[ 322], 00:14:11.074 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 619], 99.95th=[ 1057], 00:14:11.074 | 99.99th=[ 3130] 00:14:11.074 bw ( KiB/s): min=15320, max=17216, per=30.12%, avg=16549.33, stdev=695.84, samples=6 00:14:11.074 iops : min= 3830, max= 4304, avg=4137.33, stdev=173.96, samples=6 
00:14:11.074 lat (usec) : 250=87.69%, 500=12.18%, 750=0.05%, 1000=0.03% 00:14:11.074 lat (msec) : 2=0.01%, 4=0.04% 00:14:11.074 cpu : usr=1.62%, sys=5.85%, ctx=13948, majf=0, minf=2 00:14:11.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:11.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.074 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.074 issued rwts: total=13944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:11.074 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69684: Tue Dec 10 11:17:17 2024 00:14:11.074 read: IOPS=3609, BW=14.1MiB/s (14.8MB/s)(41.7MiB/2960msec) 00:14:11.074 slat (usec): min=11, max=1439, avg=19.44, stdev=15.51 00:14:11.074 clat (usec): min=178, max=2451, avg=255.34, stdev=93.06 00:14:11.074 lat (usec): min=191, max=2471, avg=274.78, stdev=98.48 00:14:11.074 clat percentiles (usec): 00:14:11.074 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:14:11.074 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:14:11.074 | 70.00th=[ 235], 80.00th=[ 297], 90.00th=[ 392], 95.00th=[ 490], 00:14:11.074 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 644], 99.95th=[ 840], 00:14:11.074 | 99.99th=[ 1532] 00:14:11.074 bw ( KiB/s): min=13816, max=17352, per=28.54%, avg=15678.40, stdev=1435.52, samples=5 00:14:11.074 iops : min= 3454, max= 4338, avg=3919.60, stdev=358.88, samples=5 00:14:11.074 lat (usec) : 250=75.77%, 500=20.27%, 750=3.89%, 1000=0.05% 00:14:11.074 lat (msec) : 2=0.01%, 4=0.01% 00:14:11.074 cpu : usr=1.82%, sys=6.18%, ctx=10684, majf=0, minf=1 00:14:11.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:11.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.074 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.074 issued rwts: total=10683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:11.074 00:14:11.074 Run status group 0 (all jobs): 00:14:11.074 READ: bw=53.7MiB/s (56.3MB/s), 14.1MiB/s-17.6MiB/s (14.8MB/s-18.5MB/s), io=228MiB (239MB), run=2960-4250msec 00:14:11.074 00:14:11.074 Disk stats (read/write): 00:14:11.074 nvme0n1: ios=14370/0, merge=0/0, ticks=3332/0, in_queue=3332, util=95.65% 00:14:11.074 nvme0n2: ios=18491/0, merge=0/0, ticks=3793/0, in_queue=3793, util=96.18% 00:14:11.074 nvme0n3: ios=13823/0, merge=0/0, ticks=3133/0, in_queue=3133, util=96.34% 00:14:11.074 nvme0n4: ios=10497/0, merge=0/0, ticks=2694/0, in_queue=2694, util=96.76% 00:14:11.074 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:11.074 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:11.640 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:11.640 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:12.207 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:14:12.207 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:12.465 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:12.465 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:13.032 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.032 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:13.599 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:13.599 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69635 00:14:13.599 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:13.599 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.599 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:13.599 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:14:13.599 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:13.600 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.600 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:13.600 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.600 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:14:13.600 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:13.600 nvmf hotplug test: fio failed as expected 00:14:13.600 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:13.600 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.858 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.858 rmmod nvme_tcp 00:14:13.858 rmmod nvme_fabrics 00:14:13.858 rmmod nvme_keyring 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69241 ']' 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69241 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69241 ']' 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69241 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69241 00:14:14.116 killing process with pid 69241 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69241' 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69241 00:14:14.116 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69241 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:15.051 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.310 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:15.310 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:15.310 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:15.310 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:15.310 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:15.310 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:15.310 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:15.310 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:14:15.310 ************************************ 00:14:15.310 END TEST nvmf_fio_target 00:14:15.310 ************************************ 00:14:15.310 00:14:15.310 real 0m24.048s 00:14:15.310 user 1m30.282s 00:14:15.310 sys 0m11.070s 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:15.310 ************************************ 00:14:15.310 START TEST nvmf_bdevio 00:14:15.310 ************************************ 00:14:15.310 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:15.569 * Looking for test storage... 
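Before the START TEST nvmf_bdevio banner above, the trace covers the nvmf_fio_target teardown: the extra malloc bdevs are deleted over RPC, the kernel initiator is detached with nvme disconnect, the disconnect is confirmed by polling lsblk for the subsystem serial, the subsystem is removed, and nvmftestfini unloads the nvme-tcp modules and dismantles the veth/bridge topology. A condensed sketch of the initiator-facing part, with paths, serial and NQN taken from the log (the polling loop is a simplified stand-in for the waitforserial_disconnect helper, not its exact code):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for bdev in Malloc4 Malloc5 Malloc6; do
      "$rpc" bdev_malloc_delete "$bdev"              # drop the fio namespaces first
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the kernel initiator
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 0.1                                      # wait until no block device still reports the serial
  done
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1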
00:14:15.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:15.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.569 --rc genhtml_branch_coverage=1 00:14:15.569 --rc genhtml_function_coverage=1 00:14:15.569 --rc genhtml_legend=1 00:14:15.569 --rc geninfo_all_blocks=1 00:14:15.569 --rc geninfo_unexecuted_blocks=1 00:14:15.569 00:14:15.569 ' 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:15.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.569 --rc genhtml_branch_coverage=1 00:14:15.569 --rc genhtml_function_coverage=1 00:14:15.569 --rc genhtml_legend=1 00:14:15.569 --rc geninfo_all_blocks=1 00:14:15.569 --rc geninfo_unexecuted_blocks=1 00:14:15.569 00:14:15.569 ' 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:15.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.569 --rc genhtml_branch_coverage=1 00:14:15.569 --rc genhtml_function_coverage=1 00:14:15.569 --rc genhtml_legend=1 00:14:15.569 --rc geninfo_all_blocks=1 00:14:15.569 --rc geninfo_unexecuted_blocks=1 00:14:15.569 00:14:15.569 ' 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:15.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.569 --rc genhtml_branch_coverage=1 00:14:15.569 --rc genhtml_function_coverage=1 00:14:15.569 --rc genhtml_legend=1 00:14:15.569 --rc geninfo_all_blocks=1 00:14:15.569 --rc geninfo_unexecuted_blocks=1 00:14:15.569 00:14:15.569 ' 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.569 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.570 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
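The "[: : integer expression expected" message from common.sh line 33 above is emitted while build_nvmf_app_args runs: the traced test is '[' '' -eq 1 ']', a numeric -eq comparison against a variable that expands to an empty string in this configuration. The test simply evaluates as false and the script carries on, so the message is noise rather than a failure. A minimal reproduction with the usual guard; SOME_FLAG is a stand-in, since the log does not show which variable is empty here:

  SOME_FLAG=""
  [ "$SOME_FLAG" -eq 1 ]          # prints "[: : integer expression expected" and returns non-zero
  [ "${SOME_FLAG:-0}" -eq 1 ]     # defaulting the expansion keeps the test numeric and silent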
00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:15.570 Cannot find device "nvmf_init_br" 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:15.570 Cannot find device "nvmf_init_br2" 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:15.570 Cannot find device "nvmf_tgt_br" 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.570 Cannot find device "nvmf_tgt_br2" 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:15.570 Cannot find device "nvmf_init_br" 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:15.570 Cannot find device "nvmf_init_br2" 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:15.570 Cannot find device "nvmf_tgt_br" 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:15.570 Cannot find device "nvmf_tgt_br2" 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:15.570 Cannot find device "nvmf_br" 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:14:15.570 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:15.828 Cannot find device "nvmf_init_if" 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:15.828 Cannot find device "nvmf_init_if2" 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:15.828 
11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:15.828 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:15.828 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:14:15.828 00:14:15.828 --- 10.0.0.3 ping statistics --- 00:14:15.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.828 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:15.828 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:15.828 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:14:15.828 00:14:15.828 --- 10.0.0.4 ping statistics --- 00:14:15.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.828 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:15.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:15.828 00:14:15.828 --- 10.0.0.1 ping statistics --- 00:14:15.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.828 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:15.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:15.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:14:15.828 00:14:15.828 --- 10.0.0.2 ping statistics --- 00:14:15.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.828 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.828 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70032 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70032 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70032 ']' 00:14:16.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.087 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:16.087 [2024-12-10 11:17:22.793508] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
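The nvmftestinit block traced above builds a purely virtual NVMe/TCP fabric: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, addresses 10.0.0.1-2 on the host side and 10.0.0.3-4 inside the namespace, plus iptables ACCEPT rules for port 4420 tagged with an SPDK_NVMF comment so teardown can strip them again; the four pings are the connectivity smoke test. A condensed sketch of one initiator/target pair, with commands copied from the trace (the second pair and the link-up steps are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # tagged so teardown can simply run: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                                # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # namespace -> host

The target application is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...), which is why its listener at 10.0.0.3:4420 is reachable from the host side of the bridge.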
00:14:16.087 [2024-12-10 11:17:22.793877] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.345 [2024-12-10 11:17:22.971050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.345 [2024-12-10 11:17:23.078450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.345 [2024-12-10 11:17:23.078916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.345 [2024-12-10 11:17:23.078952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.345 [2024-12-10 11:17:23.078967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.345 [2024-12-10 11:17:23.078984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.345 [2024-12-10 11:17:23.080978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:16.345 [2024-12-10 11:17:23.081078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:16.345 [2024-12-10 11:17:23.081142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:16.345 [2024-12-10 11:17:23.081487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.604 [2024-12-10 11:17:23.269130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:17.170 [2024-12-10 11:17:23.860380] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:17.170 Malloc0 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.170 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:17.170 [2024-12-10 11:17:23.990091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:17.429 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.429 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:17.429 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:17.429 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:17.429 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:17.429 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:17.429 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:17.429 { 00:14:17.429 "params": { 00:14:17.429 "name": "Nvme$subsystem", 00:14:17.429 "trtype": "$TEST_TRANSPORT", 00:14:17.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:17.429 "adrfam": "ipv4", 00:14:17.429 "trsvcid": "$NVMF_PORT", 00:14:17.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:17.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:17.429 "hdgst": ${hdgst:-false}, 00:14:17.429 "ddgst": ${ddgst:-false} 00:14:17.429 }, 00:14:17.429 "method": "bdev_nvme_attach_controller" 00:14:17.429 } 00:14:17.429 EOF 00:14:17.429 )") 00:14:17.429 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:17.429 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
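With the target up, bdevio.sh provisions it through the RPC calls traced above and then launches the bdevio initiator with a JSON config produced by gen_nvmf_target_json and handed over on /dev/fd/62. An equivalent standalone sequence, with every value copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB backing bdev, 512-byte blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The generated JSON, printed just below in the trace, is a single bdev_nvme_attach_controller entry that connects Nvme1 to 10.0.0.3:4420 over TCP, so the bdevio suite ends up exercising bdev Nvme1n1.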
00:14:17.429 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:17.429 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:17.429 "params": { 00:14:17.429 "name": "Nvme1", 00:14:17.429 "trtype": "tcp", 00:14:17.429 "traddr": "10.0.0.3", 00:14:17.429 "adrfam": "ipv4", 00:14:17.429 "trsvcid": "4420", 00:14:17.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:17.429 "hdgst": false, 00:14:17.429 "ddgst": false 00:14:17.429 }, 00:14:17.429 "method": "bdev_nvme_attach_controller" 00:14:17.429 }' 00:14:17.429 [2024-12-10 11:17:24.121791] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:14:17.429 [2024-12-10 11:17:24.122182] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70068 ] 00:14:17.688 [2024-12-10 11:17:24.314834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:17.688 [2024-12-10 11:17:24.427814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.688 [2024-12-10 11:17:24.427891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.688 [2024-12-10 11:17:24.427895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.945 [2024-12-10 11:17:24.652967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.203 I/O targets: 00:14:18.203 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:18.203 00:14:18.203 00:14:18.203 CUnit - A unit testing framework for C - Version 2.1-3 00:14:18.203 http://cunit.sourceforge.net/ 00:14:18.203 00:14:18.203 00:14:18.203 Suite: bdevio tests on: Nvme1n1 00:14:18.203 Test: blockdev write read block ...passed 00:14:18.203 Test: blockdev write zeroes read block ...passed 00:14:18.203 Test: blockdev write zeroes read no split ...passed 00:14:18.203 Test: blockdev write zeroes read split ...passed 00:14:18.203 Test: blockdev write zeroes read split partial ...passed 00:14:18.203 Test: blockdev reset ...[2024-12-10 11:17:24.952112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:18.203 [2024-12-10 11:17:24.952537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:14:18.203 [2024-12-10 11:17:24.967579] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:18.203 passed 00:14:18.203 Test: blockdev write read 8 blocks ...passed 00:14:18.203 Test: blockdev write read size > 128k ...passed 00:14:18.203 Test: blockdev write read invalid size ...passed 00:14:18.203 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:18.203 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:18.203 Test: blockdev write read max offset ...passed 00:14:18.203 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:18.203 Test: blockdev writev readv 8 blocks ...passed 00:14:18.203 Test: blockdev writev readv 30 x 1block ...passed 00:14:18.203 Test: blockdev writev readv block ...passed 00:14:18.203 Test: blockdev writev readv size > 128k ...passed 00:14:18.203 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:18.203 Test: blockdev comparev and writev ...[2024-12-10 11:17:24.981371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.203 [2024-12-10 11:17:24.981458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:18.203 [2024-12-10 11:17:24.981492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.203 [2024-12-10 11:17:24.981514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:18.203 [2024-12-10 11:17:24.981936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.203 [2024-12-10 11:17:24.981998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:18.203 [2024-12-10 11:17:24.982044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.203 [2024-12-10 11:17:24.982097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:18.203 [2024-12-10 11:17:24.982532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.203 [2024-12-10 11:17:24.982709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:18.203 [2024-12-10 11:17:24.982749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.203 [2024-12-10 11:17:24.982773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:18.203 [2024-12-10 11:17:24.983251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.203 [2024-12-10 11:17:24.983297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:18.203 [2024-12-10 11:17:24.983326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.203 [2024-12-10 11:17:24.983345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:18.203 passed 00:14:18.203 Test: blockdev nvme passthru rw ...passed 00:14:18.203 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:17:24.984556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:18.203 [2024-12-10 11:17:24.984617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:18.203 [2024-12-10 11:17:24.984779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:18.203 [2024-12-10 11:17:24.984818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:18.203 passed 00:14:18.203 Test: blockdev nvme admin passthru ...[2024-12-10 11:17:24.984965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:18.204 [2024-12-10 11:17:24.985000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:18.204 [2024-12-10 11:17:24.985144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:18.204 [2024-12-10 11:17:24.985173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:18.204 passed 00:14:18.204 Test: blockdev copy ...passed 00:14:18.204 00:14:18.204 Run Summary: Type Total Ran Passed Failed Inactive 00:14:18.204 suites 1 1 n/a 0 0 00:14:18.204 tests 23 23 23 0 0 00:14:18.204 asserts 152 152 152 0 n/a 00:14:18.204 00:14:18.204 Elapsed time = 0.369 seconds 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:19.580 rmmod nvme_tcp 00:14:19.580 rmmod nvme_fabrics 00:14:19.580 rmmod nvme_keyring 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
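After the run summary, nvmftestfini first unloads the initiator-side kernel modules; the trace above shows the retry block from nvmf/common.sh (set +e, a {1..20} loop around modprobe -r nvme-tcp, then nvme-fabrics, set -e), with rmmod confirming that nvme_tcp, nvme_fabrics and nvme_keyring were removed. Only a single pass appears in the log, so the loop body below is an assumption sketched from the traced line numbers rather than SPDK's exact code:

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1                      # assumed back-off between attempts
  done
  set -e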
00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 70032 ']' 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70032 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70032 ']' 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70032 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70032 00:14:19.580 killing process with pid 70032 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70032' 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70032 00:14:19.580 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70032 00:14:20.957 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:20.957 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:20.957 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:20.957 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:20.957 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:20.957 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:20.957 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:20.957 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:20.957 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:14:20.958 00:14:20.958 real 0m5.574s 00:14:20.958 user 0m21.095s 00:14:20.958 sys 0m1.055s 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.958 ************************************ 00:14:20.958 END TEST nvmf_bdevio 00:14:20.958 ************************************ 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:20.958 00:14:20.958 real 3m3.417s 00:14:20.958 user 8m15.400s 00:14:20.958 sys 0m53.935s 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:20.958 ************************************ 00:14:20.958 END TEST nvmf_target_core 00:14:20.958 ************************************ 00:14:20.958 11:17:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:20.958 11:17:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:20.958 11:17:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.958 11:17:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.958 ************************************ 00:14:20.958 START TEST nvmf_target_extra 00:14:20.958 ************************************ 00:14:20.958 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:21.217 * Looking for test storage... 
00:14:21.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:21.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.217 --rc genhtml_branch_coverage=1 00:14:21.217 --rc genhtml_function_coverage=1 00:14:21.217 --rc genhtml_legend=1 00:14:21.217 --rc geninfo_all_blocks=1 00:14:21.217 --rc geninfo_unexecuted_blocks=1 00:14:21.217 00:14:21.217 ' 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:21.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.217 --rc genhtml_branch_coverage=1 00:14:21.217 --rc genhtml_function_coverage=1 00:14:21.217 --rc genhtml_legend=1 00:14:21.217 --rc geninfo_all_blocks=1 00:14:21.217 --rc geninfo_unexecuted_blocks=1 00:14:21.217 00:14:21.217 ' 00:14:21.217 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:21.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.217 --rc genhtml_branch_coverage=1 00:14:21.217 --rc genhtml_function_coverage=1 00:14:21.218 --rc genhtml_legend=1 00:14:21.218 --rc geninfo_all_blocks=1 00:14:21.218 --rc geninfo_unexecuted_blocks=1 00:14:21.218 00:14:21.218 ' 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:21.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.218 --rc genhtml_branch_coverage=1 00:14:21.218 --rc genhtml_function_coverage=1 00:14:21.218 --rc genhtml_legend=1 00:14:21.218 --rc geninfo_all_blocks=1 00:14:21.218 --rc geninfo_unexecuted_blocks=1 00:14:21.218 00:14:21.218 ' 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.218 11:17:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.218 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.218 ************************************ 00:14:21.218 START TEST nvmf_auth_target 00:14:21.218 ************************************ 00:14:21.218 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:21.478 * Looking for test storage... 
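The trace above and below is the lcov version probe from scripts/common.sh: the threshold and the detected version are split on '.', '-' and ':' into arrays (read -ra ver1/ver2) and compared element by element, padding missing components with 0, to decide which lcov --rc option spelling to export. A minimal sketch of that element-wise comparison, assuming a hypothetical condensed helper named cmp_versions_sketch (the real code is the lt/cmp_versions pair seen in the trace and handles more operators and edge cases):

    cmp_versions_sketch() {            # usage: cmp_versions_sketch 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v d
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            # missing or non-numeric components count as 0
            d=${ver1[v]:-0}; [[ $d =~ ^[0-9]+$ ]] || d=0; ver1[v]=$d
            d=${ver2[v]:-0}; [[ $d =~ ^[0-9]+$ ]] || d=0; ver2[v]=$d
            (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    cmp_versions_sketch 1.15 '<' 2 && echo "installed lcov is newer than 1.15"

The same probe runs once per test suite, which is why the identical lt/decimal/LCOV_OPTS block appears again below under nvmf_auth_target.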
00:14:21.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:21.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.478 --rc genhtml_branch_coverage=1 00:14:21.478 --rc genhtml_function_coverage=1 00:14:21.478 --rc genhtml_legend=1 00:14:21.478 --rc geninfo_all_blocks=1 00:14:21.478 --rc geninfo_unexecuted_blocks=1 00:14:21.478 00:14:21.478 ' 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:21.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.478 --rc genhtml_branch_coverage=1 00:14:21.478 --rc genhtml_function_coverage=1 00:14:21.478 --rc genhtml_legend=1 00:14:21.478 --rc geninfo_all_blocks=1 00:14:21.478 --rc geninfo_unexecuted_blocks=1 00:14:21.478 00:14:21.478 ' 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:21.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.478 --rc genhtml_branch_coverage=1 00:14:21.478 --rc genhtml_function_coverage=1 00:14:21.478 --rc genhtml_legend=1 00:14:21.478 --rc geninfo_all_blocks=1 00:14:21.478 --rc geninfo_unexecuted_blocks=1 00:14:21.478 00:14:21.478 ' 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:21.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.478 --rc genhtml_branch_coverage=1 00:14:21.478 --rc genhtml_function_coverage=1 00:14:21.478 --rc genhtml_legend=1 00:14:21.478 --rc geninfo_all_blocks=1 00:14:21.478 --rc geninfo_unexecuted_blocks=1 00:14:21.478 00:14:21.478 ' 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.478 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.479 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:21.479 
11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:21.479 Cannot find device "nvmf_init_br" 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:21.479 Cannot find device "nvmf_init_br2" 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:21.479 Cannot find device "nvmf_tgt_br" 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.479 Cannot find device "nvmf_tgt_br2" 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:21.479 Cannot find device "nvmf_init_br" 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:21.479 Cannot find device "nvmf_init_br2" 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:21.479 Cannot find device "nvmf_tgt_br" 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:21.479 Cannot find device "nvmf_tgt_br2" 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:14:21.479 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:21.738 Cannot find device "nvmf_br" 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:21.738 Cannot find device "nvmf_init_if" 00:14:21.738 11:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:21.738 Cannot find device "nvmf_init_if2" 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:21.738 11:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:21.738 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:21.739 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:21.739 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:14:21.739 00:14:21.739 --- 10.0.0.3 ping statistics --- 00:14:21.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.739 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:21.739 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:21.739 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:14:21.739 00:14:21.739 --- 10.0.0.4 ping statistics --- 00:14:21.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.739 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:21.739 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:21.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:21.998 00:14:21.998 --- 10.0.0.1 ping statistics --- 00:14:21.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.998 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:21.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:14:21.998 00:14:21.998 --- 10.0.0.2 ping statistics --- 00:14:21.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.998 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70409 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70409 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70409 ']' 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
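At this point nvmf_veth_init has finished wiring up the test network: a network namespace (nvmf_tgt_ns_spdk) holds the target-side ends of two veth pairs at 10.0.0.3 and 10.0.0.4, the initiator-side interfaces stay in the root namespace at 10.0.0.1 and 10.0.0.2, the four *_br peers are enslaved to the nvmf_br bridge, iptables accepts NVMe/TCP traffic on port 4420, and the four pings above confirm reachability in both directions (the earlier "Cannot find device" and "Cannot open network namespace" messages are just the expected output of the pre-cleanup pass removing leftovers from a previous run). A condensed sketch of that topology, with interface names, addresses and firewall rules taken directly from the trace (error handling, the cleanup pass, and the ipts comment wrapper omitted):

    NETNS=nvmf_tgt_ns_spdk
    ip netns add "$NETNS"

    # one veth pair per initiator/target interface; the *_br ends stay in the root namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target-side ends move into the namespace and get the 10.0.0.3/10.0.0.4 addresses
    ip link set nvmf_tgt_if  netns "$NETNS"
    ip link set nvmf_tgt_if2 netns "$NETNS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NETNS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NETNS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and tie the host-side peers together with a bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NETNS" ip link set nvmf_tgt_if up
    ip netns exec "$NETNS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NETNS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # let NVMe/TCP (port 4420) in, allow bridge forwarding, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec "$NETNS" ping -c 1 10.0.0.1
    ip netns exec "$NETNS" ping -c 1 10.0.0.2

Once the pings succeed, NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD, which is why the nvmf_tgt process started by nvmfappstart in the trace above is launched through "ip netns exec nvmf_tgt_ns_spdk".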
00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.998 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.935 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.935 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:22.935 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.935 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:22.935 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=70441 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=70e2e9eaa691bb0acf73c73fb2f26b51cc7c7d224ff3ef8d 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wnx 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 70e2e9eaa691bb0acf73c73fb2f26b51cc7c7d224ff3ef8d 0 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 70e2e9eaa691bb0acf73c73fb2f26b51cc7c7d224ff3ef8d 0 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=70e2e9eaa691bb0acf73c73fb2f26b51cc7c7d224ff3ef8d 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:23.194 11:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wnx 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wnx 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wnx 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=604846e5dccafa525f0b1afdc06b590cd809516d9bad238fdd2ea4fc87b7aa21 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.I3C 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 604846e5dccafa525f0b1afdc06b590cd809516d9bad238fdd2ea4fc87b7aa21 3 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 604846e5dccafa525f0b1afdc06b590cd809516d9bad238fdd2ea4fc87b7aa21 3 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=604846e5dccafa525f0b1afdc06b590cd809516d9bad238fdd2ea4fc87b7aa21 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.I3C 00:14:23.194 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.I3C 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.I3C 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:23.195 11:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cd1078732c090af862c6f67d0d7a8b46 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.63t 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cd1078732c090af862c6f67d0d7a8b46 1 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cd1078732c090af862c6f67d0d7a8b46 1 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cd1078732c090af862c6f67d0d7a8b46 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.63t 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.63t 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.63t 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:23.195 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9ef26fbdcada3202da4e7db637138706b8080b02764cd130 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kI5 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9ef26fbdcada3202da4e7db637138706b8080b02764cd130 2 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9ef26fbdcada3202da4e7db637138706b8080b02764cd130 2 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9ef26fbdcada3202da4e7db637138706b8080b02764cd130 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:23.195 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kI5 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kI5 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.kI5 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6de11725f2ae9ae315e44ba8a84949df0da673356110d173 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Vl2 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6de11725f2ae9ae315e44ba8a84949df0da673356110d173 2 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6de11725f2ae9ae315e44ba8a84949df0da673356110d173 2 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6de11725f2ae9ae315e44ba8a84949df0da673356110d173 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Vl2 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Vl2 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Vl2 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:23.454 11:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c55888ef356b943791ae9846ce1e2696 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.doO 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c55888ef356b943791ae9846ce1e2696 1 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c55888ef356b943791ae9846ce1e2696 1 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c55888ef356b943791ae9846ce1e2696 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.doO 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.doO 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.doO 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:23.454 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b8feb28dfc3b30e704edb18328214c2886ffafa8ecf7a774b8ad67103f5c1172 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.921 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
b8feb28dfc3b30e704edb18328214c2886ffafa8ecf7a774b8ad67103f5c1172 3 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b8feb28dfc3b30e704edb18328214c2886ffafa8ecf7a774b8ad67103f5c1172 3 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b8feb28dfc3b30e704edb18328214c2886ffafa8ecf7a774b8ad67103f5c1172 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.921 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.921 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.921 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 70409 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70409 ']' 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.455 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:24.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 70441 /var/tmp/host.sock 00:14:24.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70441 ']' 00:14:24.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:24.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:24.022 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
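The gen_dhchap_key calls traced above build the DH-HMAC-CHAP secrets used by the auth test: each call reads the requested number of hex characters from /dev/urandom with xxd, wraps the material as a DHHC-1 key through an inline "python -" snippet (its body is not visible in this trace), writes the result to a mktemp file under /tmp and chmods it to 0600. A rough sketch, assuming a hypothetical helper named gen_dhchap_key_sketch; the printf line is only a placeholder for the real DHHC-1 encoding performed by that python step:

    gen_dhchap_key_sketch() {                     # usage: gen_dhchap_key_sketch sha512 64
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # same map as in the trace
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex characters of key material
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # Placeholder: the real format_dhchap_key/format_key pair encodes the key
        # into the DHHC-1 secret format via the inline python step not shown in the log.
        printf 'DHHC-1:%02x:%s:\n' "${digests[$digest]}" "$key" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

The resulting files (keys[0..3] plus the ckeys where one is defined) are what the trace below registers twice via keyring_file_add_key: once against the nvmf target's default RPC socket and once against the host application listening on /var/tmp/host.sock, e.g.

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.wnx
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wnx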
00:14:24.022 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.022 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.280 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.280 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:24.280 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:24.280 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.280 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.280 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.281 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:24.281 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wnx 00:14:24.281 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.281 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.281 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.281 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wnx 00:14:24.281 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wnx 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.I3C ]] 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I3C 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I3C 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I3C 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.63t 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.63t 00:14:24.848 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.63t 00:14:25.416 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.kI5 ]] 00:14:25.416 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kI5 00:14:25.416 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.416 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.417 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.417 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kI5 00:14:25.417 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kI5 00:14:25.417 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:25.417 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Vl2 00:14:25.417 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.417 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.417 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.417 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Vl2 00:14:25.417 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Vl2 00:14:25.734 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.doO ]] 00:14:25.734 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.doO 00:14:25.734 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.734 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.734 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.734 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.doO 00:14:25.734 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.doO 00:14:25.993 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:25.993 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.921 00:14:25.993 11:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.993 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.993 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.993 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.921 00:14:25.993 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.921 00:14:26.251 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:26.251 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:26.251 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.251 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.251 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:26.251 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.818 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.077 00:14:27.077 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.077 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.077 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.335 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.335 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.335 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.335 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.335 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.335 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.335 { 00:14:27.335 "cntlid": 1, 00:14:27.335 "qid": 0, 00:14:27.335 "state": "enabled", 00:14:27.335 "thread": "nvmf_tgt_poll_group_000", 00:14:27.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:27.335 "listen_address": { 00:14:27.335 "trtype": "TCP", 00:14:27.335 "adrfam": "IPv4", 00:14:27.335 "traddr": "10.0.0.3", 00:14:27.335 "trsvcid": "4420" 00:14:27.335 }, 00:14:27.335 "peer_address": { 00:14:27.335 "trtype": "TCP", 00:14:27.335 "adrfam": "IPv4", 00:14:27.335 "traddr": "10.0.0.1", 00:14:27.335 "trsvcid": "40164" 00:14:27.335 }, 00:14:27.335 "auth": { 00:14:27.335 "state": "completed", 00:14:27.336 "digest": "sha256", 00:14:27.336 "dhgroup": "null" 00:14:27.336 } 00:14:27.336 } 00:14:27.336 ]' 00:14:27.336 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.336 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.336 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.336 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:27.336 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.594 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.594 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.594 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.854 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:14:27.854 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.165 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.166 11:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.166 00:14:33.166 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.166 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.166 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.425 { 00:14:33.425 "cntlid": 3, 00:14:33.425 "qid": 0, 00:14:33.425 "state": "enabled", 00:14:33.425 "thread": "nvmf_tgt_poll_group_000", 00:14:33.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:33.425 "listen_address": { 00:14:33.425 "trtype": "TCP", 00:14:33.425 "adrfam": "IPv4", 00:14:33.425 "traddr": "10.0.0.3", 00:14:33.425 "trsvcid": "4420" 00:14:33.425 }, 00:14:33.425 "peer_address": { 00:14:33.425 "trtype": "TCP", 00:14:33.425 "adrfam": "IPv4", 00:14:33.425 "traddr": "10.0.0.1", 00:14:33.425 "trsvcid": "49882" 00:14:33.425 }, 00:14:33.425 "auth": { 00:14:33.425 "state": "completed", 00:14:33.425 "digest": "sha256", 00:14:33.425 "dhgroup": "null" 00:14:33.425 } 00:14:33.425 } 00:14:33.425 ]' 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.425 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.702 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret 
DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:14:33.702 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:14:34.648 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.648 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:34.648 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.648 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.648 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.648 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.648 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:34.648 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.908 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.167 00:14:35.167 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.167 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.167 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.426 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.426 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.426 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.426 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.427 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.427 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.427 { 00:14:35.427 "cntlid": 5, 00:14:35.427 "qid": 0, 00:14:35.427 "state": "enabled", 00:14:35.427 "thread": "nvmf_tgt_poll_group_000", 00:14:35.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:35.427 "listen_address": { 00:14:35.427 "trtype": "TCP", 00:14:35.427 "adrfam": "IPv4", 00:14:35.427 "traddr": "10.0.0.3", 00:14:35.427 "trsvcid": "4420" 00:14:35.427 }, 00:14:35.427 "peer_address": { 00:14:35.427 "trtype": "TCP", 00:14:35.427 "adrfam": "IPv4", 00:14:35.427 "traddr": "10.0.0.1", 00:14:35.427 "trsvcid": "49902" 00:14:35.427 }, 00:14:35.427 "auth": { 00:14:35.427 "state": "completed", 00:14:35.427 "digest": "sha256", 00:14:35.427 "dhgroup": "null" 00:14:35.427 } 00:14:35.427 } 00:14:35.427 ]' 00:14:35.427 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.686 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.686 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.686 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:35.686 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.686 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.686 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.686 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.945 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:14:35.945 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:14:36.881 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.881 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:36.881 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.881 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.881 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.881 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.881 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.881 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.140 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.404 00:14:37.404 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.404 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.404 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.662 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.662 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.662 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.662 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.662 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.662 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.662 { 00:14:37.662 "cntlid": 7, 00:14:37.662 "qid": 0, 00:14:37.662 "state": "enabled", 00:14:37.662 "thread": "nvmf_tgt_poll_group_000", 00:14:37.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:37.662 "listen_address": { 00:14:37.662 "trtype": "TCP", 00:14:37.662 "adrfam": "IPv4", 00:14:37.662 "traddr": "10.0.0.3", 00:14:37.662 "trsvcid": "4420" 00:14:37.662 }, 00:14:37.662 "peer_address": { 00:14:37.662 "trtype": "TCP", 00:14:37.662 "adrfam": "IPv4", 00:14:37.662 "traddr": "10.0.0.1", 00:14:37.662 "trsvcid": "49912" 00:14:37.662 }, 00:14:37.662 "auth": { 00:14:37.662 "state": "completed", 00:14:37.662 "digest": "sha256", 00:14:37.662 "dhgroup": "null" 00:14:37.662 } 00:14:37.662 } 00:14:37.662 ]' 00:14:37.662 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.922 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.922 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.922 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:37.922 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.922 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.922 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.922 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.180 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:14:38.180 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:14:39.113 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.113 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:39.113 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.113 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.113 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.113 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.113 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.113 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:39.113 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.371 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.630 00:14:39.630 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.630 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.630 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.888 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.888 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.888 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.888 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.888 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.888 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.888 { 00:14:39.888 "cntlid": 9, 00:14:39.888 "qid": 0, 00:14:39.888 "state": "enabled", 00:14:39.888 "thread": "nvmf_tgt_poll_group_000", 00:14:39.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:39.888 "listen_address": { 00:14:39.888 "trtype": "TCP", 00:14:39.888 "adrfam": "IPv4", 00:14:39.888 "traddr": "10.0.0.3", 00:14:39.888 "trsvcid": "4420" 00:14:39.888 }, 00:14:39.888 "peer_address": { 00:14:39.888 "trtype": "TCP", 00:14:39.888 "adrfam": "IPv4", 00:14:39.888 "traddr": "10.0.0.1", 00:14:39.888 "trsvcid": "51744" 00:14:39.888 }, 00:14:39.888 "auth": { 00:14:39.888 "state": "completed", 00:14:39.888 "digest": "sha256", 00:14:39.888 "dhgroup": "ffdhe2048" 00:14:39.888 } 00:14:39.888 } 00:14:39.888 ]' 00:14:39.888 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.148 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.148 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.148 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.148 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.148 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.148 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.148 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.406 
11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:14:40.406 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:14:41.383 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.383 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:41.383 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.383 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.383 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.383 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.383 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:41.383 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.642 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.643 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.902 00:14:41.902 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.902 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.902 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.162 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.162 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.162 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.162 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.162 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.162 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.162 { 00:14:42.162 "cntlid": 11, 00:14:42.162 "qid": 0, 00:14:42.162 "state": "enabled", 00:14:42.162 "thread": "nvmf_tgt_poll_group_000", 00:14:42.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:42.162 "listen_address": { 00:14:42.162 "trtype": "TCP", 00:14:42.162 "adrfam": "IPv4", 00:14:42.162 "traddr": "10.0.0.3", 00:14:42.162 "trsvcid": "4420" 00:14:42.162 }, 00:14:42.162 "peer_address": { 00:14:42.162 "trtype": "TCP", 00:14:42.162 "adrfam": "IPv4", 00:14:42.162 "traddr": "10.0.0.1", 00:14:42.162 "trsvcid": "51778" 00:14:42.162 }, 00:14:42.162 "auth": { 00:14:42.162 "state": "completed", 00:14:42.162 "digest": "sha256", 00:14:42.162 "dhgroup": "ffdhe2048" 00:14:42.162 } 00:14:42.162 } 00:14:42.162 ]' 00:14:42.162 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.162 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.162 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.421 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.421 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.421 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.421 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.421 
11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.679 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:14:42.679 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:14:43.614 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.614 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:43.614 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.614 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.614 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.614 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.615 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.183 00:14:44.183 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.183 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.183 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.441 { 00:14:44.441 "cntlid": 13, 00:14:44.441 "qid": 0, 00:14:44.441 "state": "enabled", 00:14:44.441 "thread": "nvmf_tgt_poll_group_000", 00:14:44.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:44.441 "listen_address": { 00:14:44.441 "trtype": "TCP", 00:14:44.441 "adrfam": "IPv4", 00:14:44.441 "traddr": "10.0.0.3", 00:14:44.441 "trsvcid": "4420" 00:14:44.441 }, 00:14:44.441 "peer_address": { 00:14:44.441 "trtype": "TCP", 00:14:44.441 "adrfam": "IPv4", 00:14:44.441 "traddr": "10.0.0.1", 00:14:44.441 "trsvcid": "51802" 00:14:44.441 }, 00:14:44.441 "auth": { 00:14:44.441 "state": "completed", 00:14:44.441 "digest": "sha256", 00:14:44.441 "dhgroup": "ffdhe2048" 00:14:44.441 } 00:14:44.441 } 00:14:44.441 ]' 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:44.441 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.700 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.700 11:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.700 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.959 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:14:44.959 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:14:45.526 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.526 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:45.526 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.526 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.526 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.526 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:45.526 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
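Each connect_authenticate round traced above follows the same pattern: bdev_nvme_set_options selects one --dhchap-digests / --dhchap-dhgroups pair, nvmf_subsystem_add_host installs the key (and the controller key, when a ckey exists) for the host NQN, bdev_nvme_attach_controller performs the authenticated connect, and the jq filters over nvmf_subsystem_get_qpairs assert that the negotiated parameters match what was configured. The sketch below mirrors only that final assertion, using the qpair JSON shape shown in the log; check_auth is a hypothetical helper, not part of target/auth.sh.

import json

# Qpair listing in the same shape as the rpc_cmd nvmf_subsystem_get_qpairs output
# captured above, trimmed to the fields the test actually checks.
qpairs_json = """
[
  {
    "cntlid": 9,
    "qid": 0,
    "state": "enabled",
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe2048"}
  }
]
"""

def check_auth(qpairs: str, digest: str, dhgroup: str) -> None:
    # Mirrors jq -r '.[0].auth.state' / '.[0].auth.digest' / '.[0].auth.dhgroup'
    auth = json.loads(qpairs)[0]["auth"]
    assert auth["state"] == "completed", auth
    assert auth["digest"] == digest, auth
    assert auth["dhgroup"] == dhgroup, auth

check_auth(qpairs_json, "sha256", "ffdhe2048")  # the sha256 / ffdhe2048 round traced above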
00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.094 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.353 00:14:46.353 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.353 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.353 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.611 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.611 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.611 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.611 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.611 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.611 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.611 { 00:14:46.611 "cntlid": 15, 00:14:46.611 "qid": 0, 00:14:46.611 "state": "enabled", 00:14:46.611 "thread": "nvmf_tgt_poll_group_000", 00:14:46.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:46.611 "listen_address": { 00:14:46.611 "trtype": "TCP", 00:14:46.611 "adrfam": "IPv4", 00:14:46.611 "traddr": "10.0.0.3", 00:14:46.611 "trsvcid": "4420" 00:14:46.611 }, 00:14:46.611 "peer_address": { 00:14:46.611 "trtype": "TCP", 00:14:46.611 "adrfam": "IPv4", 00:14:46.611 "traddr": "10.0.0.1", 00:14:46.611 "trsvcid": "51840" 00:14:46.611 }, 00:14:46.611 "auth": { 00:14:46.611 "state": "completed", 00:14:46.611 "digest": "sha256", 00:14:46.611 "dhgroup": "ffdhe2048" 00:14:46.611 } 00:14:46.611 } 00:14:46.611 ]' 00:14:46.611 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.611 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.611 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.869 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:46.869 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.869 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.869 
11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.869 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.128 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:14:47.128 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:14:47.694 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.694 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:47.694 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.694 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.694 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.694 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.694 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.694 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:47.694 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:47.952 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:47.952 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.952 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.952 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:47.952 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:47.952 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.953 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.953 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.953 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.953 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.953 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.953 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.953 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.532 00:14:48.532 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.532 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.532 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.819 { 00:14:48.819 "cntlid": 17, 00:14:48.819 "qid": 0, 00:14:48.819 "state": "enabled", 00:14:48.819 "thread": "nvmf_tgt_poll_group_000", 00:14:48.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:48.819 "listen_address": { 00:14:48.819 "trtype": "TCP", 00:14:48.819 "adrfam": "IPv4", 00:14:48.819 "traddr": "10.0.0.3", 00:14:48.819 "trsvcid": "4420" 00:14:48.819 }, 00:14:48.819 "peer_address": { 00:14:48.819 "trtype": "TCP", 00:14:48.819 "adrfam": "IPv4", 00:14:48.819 "traddr": "10.0.0.1", 00:14:48.819 "trsvcid": "35644" 00:14:48.819 }, 00:14:48.819 "auth": { 00:14:48.819 "state": "completed", 00:14:48.819 "digest": "sha256", 00:14:48.819 "dhgroup": "ffdhe3072" 00:14:48.819 } 00:14:48.819 } 00:14:48.819 ]' 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.819 11:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.819 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.386 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:14:49.386 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:14:49.953 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.953 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:49.953 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.953 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.953 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.953 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.953 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:49.953 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.211 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.777 00:14:50.777 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.777 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.777 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.036 { 00:14:51.036 "cntlid": 19, 00:14:51.036 "qid": 0, 00:14:51.036 "state": "enabled", 00:14:51.036 "thread": "nvmf_tgt_poll_group_000", 00:14:51.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:51.036 "listen_address": { 00:14:51.036 "trtype": "TCP", 00:14:51.036 "adrfam": "IPv4", 00:14:51.036 "traddr": "10.0.0.3", 00:14:51.036 "trsvcid": "4420" 00:14:51.036 }, 00:14:51.036 "peer_address": { 00:14:51.036 "trtype": "TCP", 00:14:51.036 "adrfam": "IPv4", 00:14:51.036 "traddr": "10.0.0.1", 00:14:51.036 "trsvcid": "35670" 00:14:51.036 }, 00:14:51.036 "auth": { 00:14:51.036 "state": "completed", 00:14:51.036 "digest": "sha256", 00:14:51.036 "dhgroup": "ffdhe3072" 00:14:51.036 } 00:14:51.036 } 00:14:51.036 ]' 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.036 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.295 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:14:51.295 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:14:52.231 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.231 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:52.231 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.231 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.231 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.231 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.231 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:52.231 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.489 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.748 00:14:52.748 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.748 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.748 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.315 { 00:14:53.315 "cntlid": 21, 00:14:53.315 "qid": 0, 00:14:53.315 "state": "enabled", 00:14:53.315 "thread": "nvmf_tgt_poll_group_000", 00:14:53.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:53.315 "listen_address": { 00:14:53.315 "trtype": "TCP", 00:14:53.315 "adrfam": "IPv4", 00:14:53.315 "traddr": "10.0.0.3", 00:14:53.315 "trsvcid": "4420" 00:14:53.315 }, 00:14:53.315 "peer_address": { 00:14:53.315 "trtype": "TCP", 00:14:53.315 "adrfam": "IPv4", 00:14:53.315 "traddr": "10.0.0.1", 00:14:53.315 "trsvcid": "35694" 00:14:53.315 }, 00:14:53.315 "auth": { 00:14:53.315 "state": "completed", 00:14:53.315 "digest": "sha256", 00:14:53.315 "dhgroup": "ffdhe3072" 00:14:53.315 } 00:14:53.315 } 00:14:53.315 ]' 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.315 11:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:53.315 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.315 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.315 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.315 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.574 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:14:53.574 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:14:54.509 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.509 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:54.509 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.509 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.509 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.509 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.509 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.509 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:54.768 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:55.026 00:14:55.285 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.285 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.285 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.543 { 00:14:55.543 "cntlid": 23, 00:14:55.543 "qid": 0, 00:14:55.543 "state": "enabled", 00:14:55.543 "thread": "nvmf_tgt_poll_group_000", 00:14:55.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:55.543 "listen_address": { 00:14:55.543 "trtype": "TCP", 00:14:55.543 "adrfam": "IPv4", 00:14:55.543 "traddr": "10.0.0.3", 00:14:55.543 "trsvcid": "4420" 00:14:55.543 }, 00:14:55.543 "peer_address": { 00:14:55.543 "trtype": "TCP", 00:14:55.543 "adrfam": "IPv4", 00:14:55.543 "traddr": "10.0.0.1", 00:14:55.543 "trsvcid": "35726" 00:14:55.543 }, 00:14:55.543 "auth": { 00:14:55.543 "state": "completed", 00:14:55.543 "digest": "sha256", 00:14:55.543 "dhgroup": "ffdhe3072" 00:14:55.543 } 00:14:55.543 } 00:14:55.543 ]' 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.543 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.950 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.950 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.950 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.209 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:14:56.209 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:14:56.775 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.776 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:56.776 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.776 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.776 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.776 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.776 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.776 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:56.776 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.342 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.600 00:14:57.600 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.600 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.600 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.858 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.858 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.858 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.858 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.858 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.858 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.858 { 00:14:57.858 "cntlid": 25, 00:14:57.858 "qid": 0, 00:14:57.858 "state": "enabled", 00:14:57.858 "thread": "nvmf_tgt_poll_group_000", 00:14:57.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:14:57.858 "listen_address": { 00:14:57.858 "trtype": "TCP", 00:14:57.858 "adrfam": "IPv4", 00:14:57.858 "traddr": "10.0.0.3", 00:14:57.858 "trsvcid": "4420" 00:14:57.858 }, 00:14:57.858 "peer_address": { 00:14:57.858 "trtype": "TCP", 00:14:57.858 "adrfam": "IPv4", 00:14:57.858 "traddr": "10.0.0.1", 00:14:57.859 "trsvcid": "35746" 00:14:57.859 }, 00:14:57.859 "auth": { 00:14:57.859 "state": "completed", 00:14:57.859 "digest": "sha256", 00:14:57.859 "dhgroup": "ffdhe4096" 00:14:57.859 } 00:14:57.859 } 00:14:57.859 ]' 00:14:57.859 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:58.117 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.117 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.117 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.117 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.117 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.117 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.117 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.375 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:14:58.376 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:14:59.309 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.309 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:14:59.309 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.309 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.309 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.309 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.309 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.309 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.569 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.827 00:14:59.827 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.827 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.827 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.394 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.394 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.394 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.394 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.394 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.394 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.394 { 00:15:00.394 "cntlid": 27, 00:15:00.394 "qid": 0, 00:15:00.394 "state": "enabled", 00:15:00.394 "thread": "nvmf_tgt_poll_group_000", 00:15:00.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:00.394 "listen_address": { 00:15:00.394 "trtype": "TCP", 00:15:00.394 "adrfam": "IPv4", 00:15:00.394 "traddr": "10.0.0.3", 00:15:00.394 "trsvcid": "4420" 00:15:00.394 }, 00:15:00.394 "peer_address": { 00:15:00.394 "trtype": "TCP", 00:15:00.394 "adrfam": "IPv4", 00:15:00.394 "traddr": "10.0.0.1", 00:15:00.394 "trsvcid": "40570" 00:15:00.394 }, 00:15:00.394 "auth": { 00:15:00.394 "state": "completed", 
00:15:00.394 "digest": "sha256", 00:15:00.394 "dhgroup": "ffdhe4096" 00:15:00.394 } 00:15:00.394 } 00:15:00.394 ]' 00:15:00.394 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.395 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.395 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.395 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.395 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.395 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.395 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.395 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.653 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:00.653 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:01.588 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.588 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:01.588 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.588 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.588 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.588 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.588 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.588 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.847 11:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.847 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.414 00:15:02.414 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.414 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.414 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.673 { 00:15:02.673 "cntlid": 29, 00:15:02.673 "qid": 0, 00:15:02.673 "state": "enabled", 00:15:02.673 "thread": "nvmf_tgt_poll_group_000", 00:15:02.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:02.673 "listen_address": { 00:15:02.673 "trtype": "TCP", 00:15:02.673 "adrfam": "IPv4", 00:15:02.673 "traddr": "10.0.0.3", 00:15:02.673 "trsvcid": "4420" 00:15:02.673 }, 00:15:02.673 "peer_address": { 00:15:02.673 "trtype": "TCP", 00:15:02.673 "adrfam": 
"IPv4", 00:15:02.673 "traddr": "10.0.0.1", 00:15:02.673 "trsvcid": "40600" 00:15:02.673 }, 00:15:02.673 "auth": { 00:15:02.673 "state": "completed", 00:15:02.673 "digest": "sha256", 00:15:02.673 "dhgroup": "ffdhe4096" 00:15:02.673 } 00:15:02.673 } 00:15:02.673 ]' 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.673 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.239 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:03.239 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:03.806 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.806 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:03.806 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.806 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.806 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.806 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.806 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.806 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:04.065 11:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.065 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.323 00:15:04.582 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.582 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.582 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.841 { 00:15:04.841 "cntlid": 31, 00:15:04.841 "qid": 0, 00:15:04.841 "state": "enabled", 00:15:04.841 "thread": "nvmf_tgt_poll_group_000", 00:15:04.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:04.841 "listen_address": { 00:15:04.841 "trtype": "TCP", 00:15:04.841 "adrfam": "IPv4", 00:15:04.841 "traddr": "10.0.0.3", 00:15:04.841 "trsvcid": "4420" 00:15:04.841 }, 00:15:04.841 "peer_address": { 00:15:04.841 "trtype": "TCP", 
00:15:04.841 "adrfam": "IPv4", 00:15:04.841 "traddr": "10.0.0.1", 00:15:04.841 "trsvcid": "40640" 00:15:04.841 }, 00:15:04.841 "auth": { 00:15:04.841 "state": "completed", 00:15:04.841 "digest": "sha256", 00:15:04.841 "dhgroup": "ffdhe4096" 00:15:04.841 } 00:15:04.841 } 00:15:04.841 ]' 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.841 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.100 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.100 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.100 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.359 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:05.359 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:06.320 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.320 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:06.320 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.320 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.320 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.320 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.320 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.320 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:06.320 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:06.320 
11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.320 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.887 00:15:06.887 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.887 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.887 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.454 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.454 { 00:15:07.454 "cntlid": 33, 00:15:07.454 "qid": 0, 00:15:07.454 "state": "enabled", 00:15:07.454 "thread": "nvmf_tgt_poll_group_000", 00:15:07.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:07.454 "listen_address": { 00:15:07.454 "trtype": "TCP", 00:15:07.454 "adrfam": "IPv4", 00:15:07.454 "traddr": 
"10.0.0.3", 00:15:07.454 "trsvcid": "4420" 00:15:07.454 }, 00:15:07.454 "peer_address": { 00:15:07.454 "trtype": "TCP", 00:15:07.454 "adrfam": "IPv4", 00:15:07.454 "traddr": "10.0.0.1", 00:15:07.454 "trsvcid": "40672" 00:15:07.454 }, 00:15:07.454 "auth": { 00:15:07.454 "state": "completed", 00:15:07.454 "digest": "sha256", 00:15:07.454 "dhgroup": "ffdhe6144" 00:15:07.454 } 00:15:07.454 } 00:15:07.454 ]' 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.454 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.713 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:07.713 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:08.648 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.648 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:08.648 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.648 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.648 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.648 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.648 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:08.648 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.915 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.482 00:15:09.482 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.482 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.482 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.740 { 00:15:09.740 "cntlid": 35, 00:15:09.740 "qid": 0, 00:15:09.740 "state": "enabled", 00:15:09.740 "thread": "nvmf_tgt_poll_group_000", 
00:15:09.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:09.740 "listen_address": { 00:15:09.740 "trtype": "TCP", 00:15:09.740 "adrfam": "IPv4", 00:15:09.740 "traddr": "10.0.0.3", 00:15:09.740 "trsvcid": "4420" 00:15:09.740 }, 00:15:09.740 "peer_address": { 00:15:09.740 "trtype": "TCP", 00:15:09.740 "adrfam": "IPv4", 00:15:09.740 "traddr": "10.0.0.1", 00:15:09.740 "trsvcid": "58502" 00:15:09.740 }, 00:15:09.740 "auth": { 00:15:09.740 "state": "completed", 00:15:09.740 "digest": "sha256", 00:15:09.740 "dhgroup": "ffdhe6144" 00:15:09.740 } 00:15:09.740 } 00:15:09.740 ]' 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.740 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.372 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:10.372 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:10.937 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.937 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:10.937 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.937 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.937 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.937 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.937 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:10.937 11:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.195 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.762 00:15:11.762 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.762 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.762 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.020 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.021 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.021 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.021 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.021 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.021 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.021 { 
00:15:12.021 "cntlid": 37, 00:15:12.021 "qid": 0, 00:15:12.021 "state": "enabled", 00:15:12.021 "thread": "nvmf_tgt_poll_group_000", 00:15:12.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:12.021 "listen_address": { 00:15:12.021 "trtype": "TCP", 00:15:12.021 "adrfam": "IPv4", 00:15:12.021 "traddr": "10.0.0.3", 00:15:12.021 "trsvcid": "4420" 00:15:12.021 }, 00:15:12.021 "peer_address": { 00:15:12.021 "trtype": "TCP", 00:15:12.021 "adrfam": "IPv4", 00:15:12.021 "traddr": "10.0.0.1", 00:15:12.021 "trsvcid": "58528" 00:15:12.021 }, 00:15:12.021 "auth": { 00:15:12.021 "state": "completed", 00:15:12.021 "digest": "sha256", 00:15:12.021 "dhgroup": "ffdhe6144" 00:15:12.021 } 00:15:12.021 } 00:15:12.021 ]' 00:15:12.021 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.021 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.021 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.279 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:12.279 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.279 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.279 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.279 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.538 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:12.538 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:13.105 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.105 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:13.105 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.105 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.105 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.105 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.105 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:13.105 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.363 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.930 00:15:13.930 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.930 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.930 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.188 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.188 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.188 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.188 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.188 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.188 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:14.188 { 00:15:14.188 "cntlid": 39, 00:15:14.188 "qid": 0, 00:15:14.188 "state": "enabled", 00:15:14.188 "thread": "nvmf_tgt_poll_group_000", 00:15:14.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:14.188 "listen_address": { 00:15:14.188 "trtype": "TCP", 00:15:14.188 "adrfam": "IPv4", 00:15:14.188 "traddr": "10.0.0.3", 00:15:14.188 "trsvcid": "4420" 00:15:14.188 }, 00:15:14.188 "peer_address": { 00:15:14.188 "trtype": "TCP", 00:15:14.188 "adrfam": "IPv4", 00:15:14.188 "traddr": "10.0.0.1", 00:15:14.188 "trsvcid": "58554" 00:15:14.189 }, 00:15:14.189 "auth": { 00:15:14.189 "state": "completed", 00:15:14.189 "digest": "sha256", 00:15:14.189 "dhgroup": "ffdhe6144" 00:15:14.189 } 00:15:14.189 } 00:15:14.189 ]' 00:15:14.189 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.189 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.189 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.447 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:14.447 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.447 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.447 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.447 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.705 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:14.705 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.639 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.575 00:15:16.575 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.575 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.575 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.834 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.834 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.834 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.834 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.834 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:16.834 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.834 { 00:15:16.834 "cntlid": 41, 00:15:16.834 "qid": 0, 00:15:16.834 "state": "enabled", 00:15:16.834 "thread": "nvmf_tgt_poll_group_000", 00:15:16.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:16.834 "listen_address": { 00:15:16.834 "trtype": "TCP", 00:15:16.834 "adrfam": "IPv4", 00:15:16.834 "traddr": "10.0.0.3", 00:15:16.834 "trsvcid": "4420" 00:15:16.834 }, 00:15:16.834 "peer_address": { 00:15:16.834 "trtype": "TCP", 00:15:16.834 "adrfam": "IPv4", 00:15:16.834 "traddr": "10.0.0.1", 00:15:16.834 "trsvcid": "58590" 00:15:16.834 }, 00:15:16.834 "auth": { 00:15:16.834 "state": "completed", 00:15:16.834 "digest": "sha256", 00:15:16.834 "dhgroup": "ffdhe8192" 00:15:16.834 } 00:15:16.834 } 00:15:16.834 ]' 00:15:16.834 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.834 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.834 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.835 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:16.835 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.093 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.093 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.093 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.351 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:17.351 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:17.918 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.918 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:17.918 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.918 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.918 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:17.918 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.918 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:17.918 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.486 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.053 00:15:19.053 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.053 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.053 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.311 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.311 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.311 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.311 11:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.311 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.311 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.311 { 00:15:19.311 "cntlid": 43, 00:15:19.311 "qid": 0, 00:15:19.311 "state": "enabled", 00:15:19.311 "thread": "nvmf_tgt_poll_group_000", 00:15:19.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:19.311 "listen_address": { 00:15:19.311 "trtype": "TCP", 00:15:19.311 "adrfam": "IPv4", 00:15:19.311 "traddr": "10.0.0.3", 00:15:19.311 "trsvcid": "4420" 00:15:19.311 }, 00:15:19.311 "peer_address": { 00:15:19.311 "trtype": "TCP", 00:15:19.311 "adrfam": "IPv4", 00:15:19.311 "traddr": "10.0.0.1", 00:15:19.311 "trsvcid": "54444" 00:15:19.311 }, 00:15:19.311 "auth": { 00:15:19.311 "state": "completed", 00:15:19.311 "digest": "sha256", 00:15:19.311 "dhgroup": "ffdhe8192" 00:15:19.311 } 00:15:19.311 } 00:15:19.311 ]' 00:15:19.311 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.311 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.311 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.569 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:19.569 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.569 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.569 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.570 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.828 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:19.828 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.775 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.711 00:15:21.711 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.711 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.711 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.969 11:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.969 { 00:15:21.969 "cntlid": 45, 00:15:21.969 "qid": 0, 00:15:21.969 "state": "enabled", 00:15:21.969 "thread": "nvmf_tgt_poll_group_000", 00:15:21.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:21.969 "listen_address": { 00:15:21.969 "trtype": "TCP", 00:15:21.969 "adrfam": "IPv4", 00:15:21.969 "traddr": "10.0.0.3", 00:15:21.969 "trsvcid": "4420" 00:15:21.969 }, 00:15:21.969 "peer_address": { 00:15:21.969 "trtype": "TCP", 00:15:21.969 "adrfam": "IPv4", 00:15:21.969 "traddr": "10.0.0.1", 00:15:21.969 "trsvcid": "54472" 00:15:21.969 }, 00:15:21.969 "auth": { 00:15:21.969 "state": "completed", 00:15:21.969 "digest": "sha256", 00:15:21.969 "dhgroup": "ffdhe8192" 00:15:21.969 } 00:15:21.969 } 00:15:21.969 ]' 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.969 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.228 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:22.228 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:23.164 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.164 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:23.164 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:23.164 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.164 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.164 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.164 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.164 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.422 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.989 00:15:23.989 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.989 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.989 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.247 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.247 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.247 
11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.247 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.247 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.247 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.247 { 00:15:24.247 "cntlid": 47, 00:15:24.247 "qid": 0, 00:15:24.247 "state": "enabled", 00:15:24.247 "thread": "nvmf_tgt_poll_group_000", 00:15:24.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:24.247 "listen_address": { 00:15:24.247 "trtype": "TCP", 00:15:24.247 "adrfam": "IPv4", 00:15:24.247 "traddr": "10.0.0.3", 00:15:24.247 "trsvcid": "4420" 00:15:24.247 }, 00:15:24.247 "peer_address": { 00:15:24.247 "trtype": "TCP", 00:15:24.247 "adrfam": "IPv4", 00:15:24.247 "traddr": "10.0.0.1", 00:15:24.247 "trsvcid": "54498" 00:15:24.247 }, 00:15:24.247 "auth": { 00:15:24.247 "state": "completed", 00:15:24.247 "digest": "sha256", 00:15:24.247 "dhgroup": "ffdhe8192" 00:15:24.247 } 00:15:24.247 } 00:15:24.247 ]' 00:15:24.247 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.506 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.506 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.506 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.506 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.506 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.506 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.506 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.764 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:24.765 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.699 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.957 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.958 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.216 00:15:26.216 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.216 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.216 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.475 { 00:15:26.475 "cntlid": 49, 00:15:26.475 "qid": 0, 00:15:26.475 "state": "enabled", 00:15:26.475 "thread": "nvmf_tgt_poll_group_000", 00:15:26.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:26.475 "listen_address": { 00:15:26.475 "trtype": "TCP", 00:15:26.475 "adrfam": "IPv4", 00:15:26.475 "traddr": "10.0.0.3", 00:15:26.475 "trsvcid": "4420" 00:15:26.475 }, 00:15:26.475 "peer_address": { 00:15:26.475 "trtype": "TCP", 00:15:26.475 "adrfam": "IPv4", 00:15:26.475 "traddr": "10.0.0.1", 00:15:26.475 "trsvcid": "54524" 00:15:26.475 }, 00:15:26.475 "auth": { 00:15:26.475 "state": "completed", 00:15:26.475 "digest": "sha384", 00:15:26.475 "dhgroup": "null" 00:15:26.475 } 00:15:26.475 } 00:15:26.475 ]' 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:26.475 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.733 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.733 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.733 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.992 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:26.992 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:27.559 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.559 11:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:27.559 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.559 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.559 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.559 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.559 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.559 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.818 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.077 00:15:28.335 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.335 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
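Each digest/dhgroup/key iteration repeats the same three RPCs that the trace shows being set up here for sha384/null/key1. The sketch below is reconstructed from the logged commands; the target-side rpc_cmd wrapper is not expanded in the log, so showing it as a plain rpc.py call on the default socket is an assumption.

# One iteration of the digest/dhgroup/key loop (sha384 / null / key1).
# Host RPCs go to the bdev_nvme host application at /var/tmp/host.sock;
# the subsystem_add_host call is the target-side RPC (default socket assumed).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups null
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

target/auth.sh runs this for every combination of digest, DH group and key id, which is why the same pattern repeats throughout the log.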
00:15:28.335 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.592 { 00:15:28.592 "cntlid": 51, 00:15:28.592 "qid": 0, 00:15:28.592 "state": "enabled", 00:15:28.592 "thread": "nvmf_tgt_poll_group_000", 00:15:28.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:28.592 "listen_address": { 00:15:28.592 "trtype": "TCP", 00:15:28.592 "adrfam": "IPv4", 00:15:28.592 "traddr": "10.0.0.3", 00:15:28.592 "trsvcid": "4420" 00:15:28.592 }, 00:15:28.592 "peer_address": { 00:15:28.592 "trtype": "TCP", 00:15:28.592 "adrfam": "IPv4", 00:15:28.592 "traddr": "10.0.0.1", 00:15:28.592 "trsvcid": "54542" 00:15:28.592 }, 00:15:28.592 "auth": { 00:15:28.592 "state": "completed", 00:15:28.592 "digest": "sha384", 00:15:28.592 "dhgroup": "null" 00:15:28.592 } 00:15:28.592 } 00:15:28.592 ]' 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.592 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.158 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:29.158 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.091 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.091 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.092 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.092 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.092 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.092 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.092 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.092 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.092 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.658 00:15:30.658 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.658 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:30.658 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.915 { 00:15:30.915 "cntlid": 53, 00:15:30.915 "qid": 0, 00:15:30.915 "state": "enabled", 00:15:30.915 "thread": "nvmf_tgt_poll_group_000", 00:15:30.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:30.915 "listen_address": { 00:15:30.915 "trtype": "TCP", 00:15:30.915 "adrfam": "IPv4", 00:15:30.915 "traddr": "10.0.0.3", 00:15:30.915 "trsvcid": "4420" 00:15:30.915 }, 00:15:30.915 "peer_address": { 00:15:30.915 "trtype": "TCP", 00:15:30.915 "adrfam": "IPv4", 00:15:30.915 "traddr": "10.0.0.1", 00:15:30.915 "trsvcid": "48896" 00:15:30.915 }, 00:15:30.915 "auth": { 00:15:30.915 "state": "completed", 00:15:30.915 "digest": "sha384", 00:15:30.915 "dhgroup": "null" 00:15:30.915 } 00:15:30.915 } 00:15:30.915 ]' 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.915 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.481 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:31.481 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:32.049 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.049 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:32.049 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.049 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.049 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.049 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.049 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.049 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.307 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.874 00:15:32.874 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.874 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.874 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.133 { 00:15:33.133 "cntlid": 55, 00:15:33.133 "qid": 0, 00:15:33.133 "state": "enabled", 00:15:33.133 "thread": "nvmf_tgt_poll_group_000", 00:15:33.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:33.133 "listen_address": { 00:15:33.133 "trtype": "TCP", 00:15:33.133 "adrfam": "IPv4", 00:15:33.133 "traddr": "10.0.0.3", 00:15:33.133 "trsvcid": "4420" 00:15:33.133 }, 00:15:33.133 "peer_address": { 00:15:33.133 "trtype": "TCP", 00:15:33.133 "adrfam": "IPv4", 00:15:33.133 "traddr": "10.0.0.1", 00:15:33.133 "trsvcid": "48910" 00:15:33.133 }, 00:15:33.133 "auth": { 00:15:33.133 "state": "completed", 00:15:33.133 "digest": "sha384", 00:15:33.133 "dhgroup": "null" 00:15:33.133 } 00:15:33.133 } 00:15:33.133 ]' 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:33.133 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.391 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.391 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.391 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.650 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:33.650 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:34.216 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:15:34.216 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:34.216 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.216 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.216 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.216 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.216 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.216 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.216 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.781 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:34.781 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.781 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.781 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:34.781 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:34.781 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.781 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.782 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.782 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.782 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.782 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.782 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.782 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.040 00:15:35.040 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
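After every attach the script verifies the negotiated parameters on both sides before detaching, as the trace records next for this ffdhe2048/key0 pass. A sketch of that check, reconstructed from the logged jq filters and RPCs (again with the target-side rpc_cmd shown as a plain rpc.py call, which is an assumption about the wrapper):

# Confirm the host-side controller exists, then check the target-side qpair's
# negotiated auth parameters before detaching (expected values for this pass).
[[ $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0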
00:15:35.040 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.040 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.299 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.299 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.299 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.299 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.299 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.299 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.299 { 00:15:35.299 "cntlid": 57, 00:15:35.299 "qid": 0, 00:15:35.299 "state": "enabled", 00:15:35.299 "thread": "nvmf_tgt_poll_group_000", 00:15:35.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:35.299 "listen_address": { 00:15:35.299 "trtype": "TCP", 00:15:35.299 "adrfam": "IPv4", 00:15:35.299 "traddr": "10.0.0.3", 00:15:35.299 "trsvcid": "4420" 00:15:35.299 }, 00:15:35.299 "peer_address": { 00:15:35.299 "trtype": "TCP", 00:15:35.300 "adrfam": "IPv4", 00:15:35.300 "traddr": "10.0.0.1", 00:15:35.300 "trsvcid": "48934" 00:15:35.300 }, 00:15:35.300 "auth": { 00:15:35.300 "state": "completed", 00:15:35.300 "digest": "sha384", 00:15:35.300 "dhgroup": "ffdhe2048" 00:15:35.300 } 00:15:35.300 } 00:15:35.300 ]' 00:15:35.300 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.300 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.300 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.300 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.300 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.558 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.558 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.558 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.817 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:35.817 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: 
--dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:36.383 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.383 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:36.383 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.383 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.383 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.383 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.383 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.383 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.642 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.208 00:15:37.208 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.208 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.208 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.467 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.467 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.467 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.468 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.468 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.468 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.468 { 00:15:37.468 "cntlid": 59, 00:15:37.468 "qid": 0, 00:15:37.468 "state": "enabled", 00:15:37.468 "thread": "nvmf_tgt_poll_group_000", 00:15:37.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:37.468 "listen_address": { 00:15:37.468 "trtype": "TCP", 00:15:37.468 "adrfam": "IPv4", 00:15:37.468 "traddr": "10.0.0.3", 00:15:37.468 "trsvcid": "4420" 00:15:37.468 }, 00:15:37.468 "peer_address": { 00:15:37.468 "trtype": "TCP", 00:15:37.468 "adrfam": "IPv4", 00:15:37.468 "traddr": "10.0.0.1", 00:15:37.468 "trsvcid": "48960" 00:15:37.468 }, 00:15:37.468 "auth": { 00:15:37.468 "state": "completed", 00:15:37.468 "digest": "sha384", 00:15:37.468 "dhgroup": "ffdhe2048" 00:15:37.468 } 00:15:37.468 } 00:15:37.468 ]' 00:15:37.468 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.468 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.468 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.468 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.468 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.726 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.726 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.726 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.984 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:37.984 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:38.919 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.919 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:38.919 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.919 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.919 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.919 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.919 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.919 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.177 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.435 00:15:39.435 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.435 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.435 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.693 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.693 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.693 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.693 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.951 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.951 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.951 { 00:15:39.951 "cntlid": 61, 00:15:39.951 "qid": 0, 00:15:39.951 "state": "enabled", 00:15:39.951 "thread": "nvmf_tgt_poll_group_000", 00:15:39.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:39.951 "listen_address": { 00:15:39.951 "trtype": "TCP", 00:15:39.951 "adrfam": "IPv4", 00:15:39.951 "traddr": "10.0.0.3", 00:15:39.952 "trsvcid": "4420" 00:15:39.952 }, 00:15:39.952 "peer_address": { 00:15:39.952 "trtype": "TCP", 00:15:39.952 "adrfam": "IPv4", 00:15:39.952 "traddr": "10.0.0.1", 00:15:39.952 "trsvcid": "33680" 00:15:39.952 }, 00:15:39.952 "auth": { 00:15:39.952 "state": "completed", 00:15:39.952 "digest": "sha384", 00:15:39.952 "dhgroup": "ffdhe2048" 00:15:39.952 } 00:15:39.952 } 00:15:39.952 ]' 00:15:39.952 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.952 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.952 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.952 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.952 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.952 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.952 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.952 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.209 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:40.209 11:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:41.144 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.144 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:41.144 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.144 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.144 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.144 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.144 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.144 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.401 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:41.401 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.401 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.401 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:41.401 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.402 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.402 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:15:41.402 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.402 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.402 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.402 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.402 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.402 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.967 00:15:41.967 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.967 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.967 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.226 { 00:15:42.226 "cntlid": 63, 00:15:42.226 "qid": 0, 00:15:42.226 "state": "enabled", 00:15:42.226 "thread": "nvmf_tgt_poll_group_000", 00:15:42.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:42.226 "listen_address": { 00:15:42.226 "trtype": "TCP", 00:15:42.226 "adrfam": "IPv4", 00:15:42.226 "traddr": "10.0.0.3", 00:15:42.226 "trsvcid": "4420" 00:15:42.226 }, 00:15:42.226 "peer_address": { 00:15:42.226 "trtype": "TCP", 00:15:42.226 "adrfam": "IPv4", 00:15:42.226 "traddr": "10.0.0.1", 00:15:42.226 "trsvcid": "33704" 00:15:42.226 }, 00:15:42.226 "auth": { 00:15:42.226 "state": "completed", 00:15:42.226 "digest": "sha384", 00:15:42.226 "dhgroup": "ffdhe2048" 00:15:42.226 } 00:15:42.226 } 00:15:42.226 ]' 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.226 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.793 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:42.793 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:43.359 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.359 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:43.359 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.359 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.359 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.359 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.359 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.359 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.359 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:43.618 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.184 00:15:44.184 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.184 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.184 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.442 { 00:15:44.442 "cntlid": 65, 00:15:44.442 "qid": 0, 00:15:44.442 "state": "enabled", 00:15:44.442 "thread": "nvmf_tgt_poll_group_000", 00:15:44.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:44.442 "listen_address": { 00:15:44.442 "trtype": "TCP", 00:15:44.442 "adrfam": "IPv4", 00:15:44.442 "traddr": "10.0.0.3", 00:15:44.442 "trsvcid": "4420" 00:15:44.442 }, 00:15:44.442 "peer_address": { 00:15:44.442 "trtype": "TCP", 00:15:44.442 "adrfam": "IPv4", 00:15:44.442 "traddr": "10.0.0.1", 00:15:44.442 "trsvcid": "33722" 00:15:44.442 }, 00:15:44.442 "auth": { 00:15:44.442 "state": "completed", 00:15:44.442 "digest": "sha384", 00:15:44.442 "dhgroup": "ffdhe3072" 00:15:44.442 } 00:15:44.442 } 00:15:44.442 ]' 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.442 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.701 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.701 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.701 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.701 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.959 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:44.959 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:45.574 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.574 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:45.574 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.574 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.877 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.878 11:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.878 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.444 00:15:46.444 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.444 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.444 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.703 { 00:15:46.703 "cntlid": 67, 00:15:46.703 "qid": 0, 00:15:46.703 "state": "enabled", 00:15:46.703 "thread": "nvmf_tgt_poll_group_000", 00:15:46.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:46.703 "listen_address": { 00:15:46.703 "trtype": "TCP", 00:15:46.703 "adrfam": "IPv4", 00:15:46.703 "traddr": "10.0.0.3", 00:15:46.703 "trsvcid": "4420" 00:15:46.703 }, 00:15:46.703 "peer_address": { 00:15:46.703 "trtype": "TCP", 00:15:46.703 "adrfam": "IPv4", 00:15:46.703 "traddr": "10.0.0.1", 00:15:46.703 "trsvcid": "33752" 00:15:46.703 }, 00:15:46.703 "auth": { 00:15:46.703 "state": "completed", 00:15:46.703 "digest": "sha384", 00:15:46.703 "dhgroup": "ffdhe3072" 00:15:46.703 } 00:15:46.703 } 00:15:46.703 ]' 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.703 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.962 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.962 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.962 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.220 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:47.220 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:47.787 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.787 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:47.787 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.787 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.787 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.787 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.787 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.787 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.353 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.612 00:15:48.612 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.612 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.612 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.870 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.870 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.870 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.870 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.870 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.870 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.870 { 00:15:48.870 "cntlid": 69, 00:15:48.870 "qid": 0, 00:15:48.870 "state": "enabled", 00:15:48.870 "thread": "nvmf_tgt_poll_group_000", 00:15:48.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:48.870 "listen_address": { 00:15:48.870 "trtype": "TCP", 00:15:48.870 "adrfam": "IPv4", 00:15:48.870 "traddr": "10.0.0.3", 00:15:48.870 "trsvcid": "4420" 00:15:48.870 }, 00:15:48.870 "peer_address": { 00:15:48.870 "trtype": "TCP", 00:15:48.870 "adrfam": "IPv4", 00:15:48.870 "traddr": "10.0.0.1", 00:15:48.870 "trsvcid": "57258" 00:15:48.870 }, 00:15:48.870 "auth": { 00:15:48.870 "state": "completed", 00:15:48.870 "digest": "sha384", 00:15:48.870 "dhgroup": "ffdhe3072" 00:15:48.870 } 00:15:48.870 } 00:15:48.870 ]' 00:15:48.870 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.870 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.870 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.133 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.133 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.133 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.133 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:49.133 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.392 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:49.392 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:50.326 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.326 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:50.326 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.326 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.326 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.326 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.326 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:50.326 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.585 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.843 00:15:50.843 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.843 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.843 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.410 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.410 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.410 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.410 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.410 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.410 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.410 { 00:15:51.410 "cntlid": 71, 00:15:51.410 "qid": 0, 00:15:51.410 "state": "enabled", 00:15:51.410 "thread": "nvmf_tgt_poll_group_000", 00:15:51.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:51.410 "listen_address": { 00:15:51.410 "trtype": "TCP", 00:15:51.410 "adrfam": "IPv4", 00:15:51.410 "traddr": "10.0.0.3", 00:15:51.410 "trsvcid": "4420" 00:15:51.410 }, 00:15:51.410 "peer_address": { 00:15:51.410 "trtype": "TCP", 00:15:51.410 "adrfam": "IPv4", 00:15:51.410 "traddr": "10.0.0.1", 00:15:51.410 "trsvcid": "57280" 00:15:51.410 }, 00:15:51.410 "auth": { 00:15:51.410 "state": "completed", 00:15:51.410 "digest": "sha384", 00:15:51.410 "dhgroup": "ffdhe3072" 00:15:51.410 } 00:15:51.410 } 00:15:51.410 ]' 00:15:51.410 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.410 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.410 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.410 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.410 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.410 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.410 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.410 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:51.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:15:52.602 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.602 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:52.602 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.602 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.602 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.602 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.602 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.602 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.602 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.861 11:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.861 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.428 00:15:53.428 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.428 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.428 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.687 { 00:15:53.687 "cntlid": 73, 00:15:53.687 "qid": 0, 00:15:53.687 "state": "enabled", 00:15:53.687 "thread": "nvmf_tgt_poll_group_000", 00:15:53.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:53.687 "listen_address": { 00:15:53.687 "trtype": "TCP", 00:15:53.687 "adrfam": "IPv4", 00:15:53.687 "traddr": "10.0.0.3", 00:15:53.687 "trsvcid": "4420" 00:15:53.687 }, 00:15:53.687 "peer_address": { 00:15:53.687 "trtype": "TCP", 00:15:53.687 "adrfam": "IPv4", 00:15:53.687 "traddr": "10.0.0.1", 00:15:53.687 "trsvcid": "57306" 00:15:53.687 }, 00:15:53.687 "auth": { 00:15:53.687 "state": "completed", 00:15:53.687 "digest": "sha384", 00:15:53.687 "dhgroup": "ffdhe4096" 00:15:53.687 } 00:15:53.687 } 00:15:53.687 ]' 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.687 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.254 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:54.254 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:15:54.822 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.822 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:54.822 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.822 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.822 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.822 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.822 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.822 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:55.389 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.390 11:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.390 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.649 00:15:55.649 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.649 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.649 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.907 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.907 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.907 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.907 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.907 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.907 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.907 { 00:15:55.907 "cntlid": 75, 00:15:55.908 "qid": 0, 00:15:55.908 "state": "enabled", 00:15:55.908 "thread": "nvmf_tgt_poll_group_000", 00:15:55.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:55.908 "listen_address": { 00:15:55.908 "trtype": "TCP", 00:15:55.908 "adrfam": "IPv4", 00:15:55.908 "traddr": "10.0.0.3", 00:15:55.908 "trsvcid": "4420" 00:15:55.908 }, 00:15:55.908 "peer_address": { 00:15:55.908 "trtype": "TCP", 00:15:55.908 "adrfam": "IPv4", 00:15:55.908 "traddr": "10.0.0.1", 00:15:55.908 "trsvcid": "57346" 00:15:55.908 }, 00:15:55.908 "auth": { 00:15:55.908 "state": "completed", 00:15:55.908 "digest": "sha384", 00:15:55.908 "dhgroup": "ffdhe4096" 00:15:55.908 } 00:15:55.908 } 00:15:55.908 ]' 00:15:55.908 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.908 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.908 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.166 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:15:56.166 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.166 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.166 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.166 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.425 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:56.425 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:15:57.364 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.365 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:57.365 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.365 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.365 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.365 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.365 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.365 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.623 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.882 00:15:57.882 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.882 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.882 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.453 { 00:15:58.453 "cntlid": 77, 00:15:58.453 "qid": 0, 00:15:58.453 "state": "enabled", 00:15:58.453 "thread": "nvmf_tgt_poll_group_000", 00:15:58.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:15:58.453 "listen_address": { 00:15:58.453 "trtype": "TCP", 00:15:58.453 "adrfam": "IPv4", 00:15:58.453 "traddr": "10.0.0.3", 00:15:58.453 "trsvcid": "4420" 00:15:58.453 }, 00:15:58.453 "peer_address": { 00:15:58.453 "trtype": "TCP", 00:15:58.453 "adrfam": "IPv4", 00:15:58.453 "traddr": "10.0.0.1", 00:15:58.453 "trsvcid": "57380" 00:15:58.453 }, 00:15:58.453 "auth": { 00:15:58.453 "state": "completed", 00:15:58.453 "digest": "sha384", 00:15:58.453 "dhgroup": "ffdhe4096" 00:15:58.453 } 00:15:58.453 } 00:15:58.453 ]' 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.453 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.712 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:58.712 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:15:59.649 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.649 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:15:59.649 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.649 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.649 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.649 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.649 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.649 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.908 11:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.908 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.475 00:16:00.475 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.475 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.475 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.734 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.734 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.734 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.734 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.734 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.734 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.734 { 00:16:00.734 "cntlid": 79, 00:16:00.734 "qid": 0, 00:16:00.734 "state": "enabled", 00:16:00.734 "thread": "nvmf_tgt_poll_group_000", 00:16:00.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:00.734 "listen_address": { 00:16:00.734 "trtype": "TCP", 00:16:00.734 "adrfam": "IPv4", 00:16:00.734 "traddr": "10.0.0.3", 00:16:00.734 "trsvcid": "4420" 00:16:00.734 }, 00:16:00.734 "peer_address": { 00:16:00.734 "trtype": "TCP", 00:16:00.734 "adrfam": "IPv4", 00:16:00.734 "traddr": "10.0.0.1", 00:16:00.734 "trsvcid": "46292" 00:16:00.734 }, 00:16:00.734 "auth": { 00:16:00.734 "state": "completed", 00:16:00.734 "digest": "sha384", 00:16:00.734 "dhgroup": "ffdhe4096" 00:16:00.734 } 00:16:00.734 } 00:16:00.734 ]' 00:16:00.734 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.735 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.735 11:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.735 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.735 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.735 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.735 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.735 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:01.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:01.871 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.871 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:01.871 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.871 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.871 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.871 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.871 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.871 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:01.871 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.131 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.700 00:16:02.700 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.700 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.700 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.266 { 00:16:03.266 "cntlid": 81, 00:16:03.266 "qid": 0, 00:16:03.266 "state": "enabled", 00:16:03.266 "thread": "nvmf_tgt_poll_group_000", 00:16:03.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:03.266 "listen_address": { 00:16:03.266 "trtype": "TCP", 00:16:03.266 "adrfam": "IPv4", 00:16:03.266 "traddr": "10.0.0.3", 00:16:03.266 "trsvcid": "4420" 00:16:03.266 }, 00:16:03.266 "peer_address": { 00:16:03.266 "trtype": "TCP", 00:16:03.266 "adrfam": "IPv4", 00:16:03.266 "traddr": "10.0.0.1", 00:16:03.266 "trsvcid": "46312" 00:16:03.266 }, 00:16:03.266 "auth": { 00:16:03.266 "state": "completed", 00:16:03.266 "digest": "sha384", 00:16:03.266 "dhgroup": "ffdhe6144" 00:16:03.266 } 00:16:03.266 } 00:16:03.266 ]' 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.266 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.524 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:03.524 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:04.459 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.459 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:04.459 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.459 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.459 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.459 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.459 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.459 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.459 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.718 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.976 00:16:04.976 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.976 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.976 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.543 { 00:16:05.543 "cntlid": 83, 00:16:05.543 "qid": 0, 00:16:05.543 "state": "enabled", 00:16:05.543 "thread": "nvmf_tgt_poll_group_000", 00:16:05.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:05.543 "listen_address": { 00:16:05.543 "trtype": "TCP", 00:16:05.543 "adrfam": "IPv4", 00:16:05.543 "traddr": "10.0.0.3", 00:16:05.543 "trsvcid": "4420" 00:16:05.543 }, 00:16:05.543 "peer_address": { 00:16:05.543 "trtype": "TCP", 00:16:05.543 "adrfam": "IPv4", 00:16:05.543 "traddr": "10.0.0.1", 00:16:05.543 "trsvcid": "46342" 00:16:05.543 }, 00:16:05.543 "auth": { 00:16:05.543 "state": "completed", 00:16:05.543 "digest": "sha384", 
00:16:05.543 "dhgroup": "ffdhe6144" 00:16:05.543 } 00:16:05.543 } 00:16:05.543 ]' 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.543 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.110 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:06.110 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:07.054 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.054 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:07.054 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.054 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.054 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.054 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.054 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.054 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.313 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.879 00:16:07.879 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.879 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.879 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.137 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.137 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.137 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.138 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.138 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.138 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.138 { 00:16:08.138 "cntlid": 85, 00:16:08.138 "qid": 0, 00:16:08.138 "state": "enabled", 00:16:08.138 "thread": "nvmf_tgt_poll_group_000", 00:16:08.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:08.138 "listen_address": { 00:16:08.138 "trtype": "TCP", 00:16:08.138 "adrfam": "IPv4", 00:16:08.138 "traddr": "10.0.0.3", 00:16:08.138 "trsvcid": "4420" 00:16:08.138 }, 00:16:08.138 "peer_address": { 00:16:08.138 "trtype": "TCP", 00:16:08.138 "adrfam": "IPv4", 00:16:08.138 "traddr": "10.0.0.1", 00:16:08.138 "trsvcid": "46372" 
00:16:08.138 }, 00:16:08.138 "auth": { 00:16:08.138 "state": "completed", 00:16:08.138 "digest": "sha384", 00:16:08.138 "dhgroup": "ffdhe6144" 00:16:08.138 } 00:16:08.138 } 00:16:08.138 ]' 00:16:08.138 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.138 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.138 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.138 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.138 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.400 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.400 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.400 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.660 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:08.660 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:09.228 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.228 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:09.228 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.228 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.228 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.228 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.228 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.228 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.486 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.487 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.054 00:16:10.054 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.054 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.054 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.313 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.313 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.313 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.313 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.313 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.313 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.313 { 00:16:10.313 "cntlid": 87, 00:16:10.313 "qid": 0, 00:16:10.313 "state": "enabled", 00:16:10.313 "thread": "nvmf_tgt_poll_group_000", 00:16:10.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:10.313 "listen_address": { 00:16:10.313 "trtype": "TCP", 00:16:10.313 "adrfam": "IPv4", 00:16:10.313 "traddr": "10.0.0.3", 00:16:10.313 "trsvcid": "4420" 00:16:10.313 }, 00:16:10.313 "peer_address": { 00:16:10.313 "trtype": "TCP", 00:16:10.313 "adrfam": "IPv4", 00:16:10.313 "traddr": "10.0.0.1", 00:16:10.313 "trsvcid": 
"53088" 00:16:10.313 }, 00:16:10.313 "auth": { 00:16:10.313 "state": "completed", 00:16:10.313 "digest": "sha384", 00:16:10.313 "dhgroup": "ffdhe6144" 00:16:10.313 } 00:16:10.313 } 00:16:10.313 ]' 00:16:10.313 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.572 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.572 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.572 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.572 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.572 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.572 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.572 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.831 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:10.831 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.766 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.720 00:16:12.720 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.720 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.720 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.979 { 00:16:12.979 "cntlid": 89, 00:16:12.979 "qid": 0, 00:16:12.979 "state": "enabled", 00:16:12.979 "thread": "nvmf_tgt_poll_group_000", 00:16:12.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:12.979 "listen_address": { 00:16:12.979 "trtype": "TCP", 00:16:12.979 "adrfam": "IPv4", 00:16:12.979 "traddr": "10.0.0.3", 00:16:12.979 "trsvcid": "4420" 00:16:12.979 }, 00:16:12.979 "peer_address": { 00:16:12.979 
"trtype": "TCP", 00:16:12.979 "adrfam": "IPv4", 00:16:12.979 "traddr": "10.0.0.1", 00:16:12.979 "trsvcid": "53108" 00:16:12.979 }, 00:16:12.979 "auth": { 00:16:12.979 "state": "completed", 00:16:12.979 "digest": "sha384", 00:16:12.979 "dhgroup": "ffdhe8192" 00:16:12.979 } 00:16:12.979 } 00:16:12.979 ]' 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.979 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.545 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:13.545 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:14.111 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.111 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:14.111 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.111 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.111 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.111 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.111 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:14.111 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:14.370 11:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.370 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.304 00:16:15.304 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.304 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.304 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.562 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.562 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.562 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.562 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.562 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.562 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.562 { 00:16:15.562 "cntlid": 91, 00:16:15.562 "qid": 0, 00:16:15.562 "state": "enabled", 00:16:15.562 "thread": "nvmf_tgt_poll_group_000", 00:16:15.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 
00:16:15.562 "listen_address": { 00:16:15.562 "trtype": "TCP", 00:16:15.562 "adrfam": "IPv4", 00:16:15.562 "traddr": "10.0.0.3", 00:16:15.562 "trsvcid": "4420" 00:16:15.562 }, 00:16:15.562 "peer_address": { 00:16:15.563 "trtype": "TCP", 00:16:15.563 "adrfam": "IPv4", 00:16:15.563 "traddr": "10.0.0.1", 00:16:15.563 "trsvcid": "53126" 00:16:15.563 }, 00:16:15.563 "auth": { 00:16:15.563 "state": "completed", 00:16:15.563 "digest": "sha384", 00:16:15.563 "dhgroup": "ffdhe8192" 00:16:15.563 } 00:16:15.563 } 00:16:15.563 ]' 00:16:15.563 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.563 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.563 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.563 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.563 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.563 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.563 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.563 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.128 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:16.128 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:16.734 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.734 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:16.734 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.734 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.734 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.734 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.734 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.734 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.992 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.926 00:16:17.926 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.926 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.926 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.184 { 00:16:18.184 "cntlid": 93, 00:16:18.184 "qid": 0, 00:16:18.184 "state": "enabled", 00:16:18.184 "thread": 
"nvmf_tgt_poll_group_000", 00:16:18.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:18.184 "listen_address": { 00:16:18.184 "trtype": "TCP", 00:16:18.184 "adrfam": "IPv4", 00:16:18.184 "traddr": "10.0.0.3", 00:16:18.184 "trsvcid": "4420" 00:16:18.184 }, 00:16:18.184 "peer_address": { 00:16:18.184 "trtype": "TCP", 00:16:18.184 "adrfam": "IPv4", 00:16:18.184 "traddr": "10.0.0.1", 00:16:18.184 "trsvcid": "53154" 00:16:18.184 }, 00:16:18.184 "auth": { 00:16:18.184 "state": "completed", 00:16:18.184 "digest": "sha384", 00:16:18.184 "dhgroup": "ffdhe8192" 00:16:18.184 } 00:16:18.184 } 00:16:18.184 ]' 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.184 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.754 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:18.754 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:19.322 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.322 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:19.322 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.322 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.322 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.322 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.322 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.322 11:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.579 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.514 00:16:20.514 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.514 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.514 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.515 { 00:16:20.515 "cntlid": 95, 00:16:20.515 "qid": 0, 00:16:20.515 "state": "enabled", 00:16:20.515 
"thread": "nvmf_tgt_poll_group_000", 00:16:20.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:20.515 "listen_address": { 00:16:20.515 "trtype": "TCP", 00:16:20.515 "adrfam": "IPv4", 00:16:20.515 "traddr": "10.0.0.3", 00:16:20.515 "trsvcid": "4420" 00:16:20.515 }, 00:16:20.515 "peer_address": { 00:16:20.515 "trtype": "TCP", 00:16:20.515 "adrfam": "IPv4", 00:16:20.515 "traddr": "10.0.0.1", 00:16:20.515 "trsvcid": "35328" 00:16:20.515 }, 00:16:20.515 "auth": { 00:16:20.515 "state": "completed", 00:16:20.515 "digest": "sha384", 00:16:20.515 "dhgroup": "ffdhe8192" 00:16:20.515 } 00:16:20.515 } 00:16:20.515 ]' 00:16:20.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.773 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.773 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.773 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.773 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.773 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.773 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.773 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.031 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:21.031 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.038 11:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.038 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.606 00:16:22.606 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.606 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.606 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.864 { 00:16:22.864 "cntlid": 97, 00:16:22.864 "qid": 0, 00:16:22.864 "state": "enabled", 00:16:22.864 "thread": "nvmf_tgt_poll_group_000", 00:16:22.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:22.864 "listen_address": { 00:16:22.864 "trtype": "TCP", 00:16:22.864 "adrfam": "IPv4", 00:16:22.864 "traddr": "10.0.0.3", 00:16:22.864 "trsvcid": "4420" 00:16:22.864 }, 00:16:22.864 "peer_address": { 00:16:22.864 "trtype": "TCP", 00:16:22.864 "adrfam": "IPv4", 00:16:22.864 "traddr": "10.0.0.1", 00:16:22.864 "trsvcid": "35356" 00:16:22.864 }, 00:16:22.864 "auth": { 00:16:22.864 "state": "completed", 00:16:22.864 "digest": "sha512", 00:16:22.864 "dhgroup": "null" 00:16:22.864 } 00:16:22.864 } 00:16:22.864 ]' 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.864 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.430 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:23.430 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:23.997 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.997 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:23.997 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.997 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.997 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:23.997 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.997 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.997 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.256 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.833 00:16:24.833 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.833 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.833 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.091 11:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.091 { 00:16:25.091 "cntlid": 99, 00:16:25.091 "qid": 0, 00:16:25.091 "state": "enabled", 00:16:25.091 "thread": "nvmf_tgt_poll_group_000", 00:16:25.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:25.091 "listen_address": { 00:16:25.091 "trtype": "TCP", 00:16:25.091 "adrfam": "IPv4", 00:16:25.091 "traddr": "10.0.0.3", 00:16:25.091 "trsvcid": "4420" 00:16:25.091 }, 00:16:25.091 "peer_address": { 00:16:25.091 "trtype": "TCP", 00:16:25.091 "adrfam": "IPv4", 00:16:25.091 "traddr": "10.0.0.1", 00:16:25.091 "trsvcid": "35374" 00:16:25.091 }, 00:16:25.091 "auth": { 00:16:25.091 "state": "completed", 00:16:25.091 "digest": "sha512", 00:16:25.091 "dhgroup": "null" 00:16:25.091 } 00:16:25.091 } 00:16:25.091 ]' 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.091 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.696 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:25.696 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:26.264 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.264 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:26.264 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.264 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.264 11:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.264 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.264 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.264 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.830 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:26.830 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.830 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.830 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.831 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.831 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.831 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.831 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.831 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.831 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.831 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.831 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.831 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.089 00:16:27.089 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.089 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.089 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.654 { 00:16:27.654 "cntlid": 101, 00:16:27.654 "qid": 0, 00:16:27.654 "state": "enabled", 00:16:27.654 "thread": "nvmf_tgt_poll_group_000", 00:16:27.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:27.654 "listen_address": { 00:16:27.654 "trtype": "TCP", 00:16:27.654 "adrfam": "IPv4", 00:16:27.654 "traddr": "10.0.0.3", 00:16:27.654 "trsvcid": "4420" 00:16:27.654 }, 00:16:27.654 "peer_address": { 00:16:27.654 "trtype": "TCP", 00:16:27.654 "adrfam": "IPv4", 00:16:27.654 "traddr": "10.0.0.1", 00:16:27.654 "trsvcid": "35394" 00:16:27.654 }, 00:16:27.654 "auth": { 00:16:27.654 "state": "completed", 00:16:27.654 "digest": "sha512", 00:16:27.654 "dhgroup": "null" 00:16:27.654 } 00:16:27.654 } 00:16:27.654 ]' 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.654 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.912 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:27.912 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:28.846 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.846 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:28.846 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.846 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
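[annotation] After the SPDK-side controller is detached, the same key is exercised once more through the kernel initiator, as in the nvme-cli call traced above. A sketch of that step, with the long DHHC-1 secrets held in variables for readability (the log passes them literally; values abbreviated here):

    key2_secret='DHHC-1:02:NmRl...'    # host secret, as printed in the trace above
    ckey2_secret='DHHC-1:01:YzU1...'   # controller secret, as printed in the trace above
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 \
        --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 \
        --dhchap-secret "$key2_secret" --dhchap-ctrl-secret "$ckey2_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # drop the host entry so the next iteration starts from a clean subsystem
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81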
-- common/autotest_common.sh@10 -- # set +x 00:16:28.846 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.846 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.846 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.846 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.105 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.671 00:16:29.671 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.671 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.671 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.029 { 00:16:30.029 "cntlid": 103, 00:16:30.029 "qid": 0, 00:16:30.029 "state": "enabled", 00:16:30.029 "thread": "nvmf_tgt_poll_group_000", 00:16:30.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:30.029 "listen_address": { 00:16:30.029 "trtype": "TCP", 00:16:30.029 "adrfam": "IPv4", 00:16:30.029 "traddr": "10.0.0.3", 00:16:30.029 "trsvcid": "4420" 00:16:30.029 }, 00:16:30.029 "peer_address": { 00:16:30.029 "trtype": "TCP", 00:16:30.029 "adrfam": "IPv4", 00:16:30.029 "traddr": "10.0.0.1", 00:16:30.029 "trsvcid": "48948" 00:16:30.029 }, 00:16:30.029 "auth": { 00:16:30.029 "state": "completed", 00:16:30.029 "digest": "sha512", 00:16:30.029 "dhgroup": "null" 00:16:30.029 } 00:16:30.029 } 00:16:30.029 ]' 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.029 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.287 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:30.287 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:31.224 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.224 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:31.224 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.224 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.224 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:31.224 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.224 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.224 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.224 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.224 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:31.224 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.224 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.224 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.224 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.224 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.224 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.224 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.224 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.483 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.484 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.484 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.484 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.742 00:16:31.742 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.742 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.742 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.000 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.000 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.000 
11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.000 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.000 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.000 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.000 { 00:16:32.000 "cntlid": 105, 00:16:32.000 "qid": 0, 00:16:32.000 "state": "enabled", 00:16:32.000 "thread": "nvmf_tgt_poll_group_000", 00:16:32.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:32.000 "listen_address": { 00:16:32.000 "trtype": "TCP", 00:16:32.000 "adrfam": "IPv4", 00:16:32.000 "traddr": "10.0.0.3", 00:16:32.000 "trsvcid": "4420" 00:16:32.000 }, 00:16:32.000 "peer_address": { 00:16:32.000 "trtype": "TCP", 00:16:32.000 "adrfam": "IPv4", 00:16:32.000 "traddr": "10.0.0.1", 00:16:32.000 "trsvcid": "48978" 00:16:32.000 }, 00:16:32.000 "auth": { 00:16:32.000 "state": "completed", 00:16:32.000 "digest": "sha512", 00:16:32.000 "dhgroup": "ffdhe2048" 00:16:32.000 } 00:16:32.000 } 00:16:32.000 ]' 00:16:32.000 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.000 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.000 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.259 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.259 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.259 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.259 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.259 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.517 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:32.518 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:33.463 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.463 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:33.463 11:19:40 
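[annotation] The trace markers target/auth.sh@119-@123 show the shape of the sweep driving these passes: an outer loop over DH groups (null above, ffdhe2048 here, ffdhe3072 later) and an inner loop over the configured keys. Reconstructed roughly from those markers, and assuming sha512 is simply the digest selected for this part of the run:

    for dhgroup in "${dhgroups[@]}"; do        # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do         # 0..3 in this run
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done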
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.463 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.463 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.463 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.463 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.463 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.722 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.018 00:16:34.018 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.018 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.018 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.302 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:34.302 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.302 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.302 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.302 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.302 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.302 { 00:16:34.302 "cntlid": 107, 00:16:34.302 "qid": 0, 00:16:34.302 "state": "enabled", 00:16:34.302 "thread": "nvmf_tgt_poll_group_000", 00:16:34.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:34.302 "listen_address": { 00:16:34.302 "trtype": "TCP", 00:16:34.302 "adrfam": "IPv4", 00:16:34.302 "traddr": "10.0.0.3", 00:16:34.302 "trsvcid": "4420" 00:16:34.302 }, 00:16:34.302 "peer_address": { 00:16:34.302 "trtype": "TCP", 00:16:34.302 "adrfam": "IPv4", 00:16:34.302 "traddr": "10.0.0.1", 00:16:34.302 "trsvcid": "49004" 00:16:34.302 }, 00:16:34.302 "auth": { 00:16:34.302 "state": "completed", 00:16:34.302 "digest": "sha512", 00:16:34.302 "dhgroup": "ffdhe2048" 00:16:34.302 } 00:16:34.302 } 00:16:34.302 ]' 00:16:34.302 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.302 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.302 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.560 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.560 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.560 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.560 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.560 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.818 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:34.818 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.753 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.011 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.011 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.011 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.011 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.269 00:16:36.269 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.269 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.269 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:36.527 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.527 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.527 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.527 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.786 { 00:16:36.786 "cntlid": 109, 00:16:36.786 "qid": 0, 00:16:36.786 "state": "enabled", 00:16:36.786 "thread": "nvmf_tgt_poll_group_000", 00:16:36.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:36.786 "listen_address": { 00:16:36.786 "trtype": "TCP", 00:16:36.786 "adrfam": "IPv4", 00:16:36.786 "traddr": "10.0.0.3", 00:16:36.786 "trsvcid": "4420" 00:16:36.786 }, 00:16:36.786 "peer_address": { 00:16:36.786 "trtype": "TCP", 00:16:36.786 "adrfam": "IPv4", 00:16:36.786 "traddr": "10.0.0.1", 00:16:36.786 "trsvcid": "49036" 00:16:36.786 }, 00:16:36.786 "auth": { 00:16:36.786 "state": "completed", 00:16:36.786 "digest": "sha512", 00:16:36.786 "dhgroup": "ffdhe2048" 00:16:36.786 } 00:16:36.786 } 00:16:36.786 ]' 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.786 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.044 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:37.044 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.978 11:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.978 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.544 00:16:38.544 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.544 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.544 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
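[annotation] Note the ckey expansion at target/auth.sh@68: for keyid 3 there is no controller key in this run, so the parameter expansion collapses to nothing and the pass runs with host-only (unidirectional) authentication, which is why the nvmf_subsystem_add_host and attach_controller calls above carry only --dhchap-key key3. A sketch of what that expansion does (the rpc_cmd/bdev_connect lines are approximations of the script, not its verbatim source):

    # inside connect_authenticate, $3 is the keyid; ckeys[3] is empty here
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # expands to () for keyid 3
    # so only the host key is passed on both the target- and host-side calls
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
    bdev_connect -b nvme0 --dhchap-key "key$3" "${ckey[@]}"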
target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.802 { 00:16:38.802 "cntlid": 111, 00:16:38.802 "qid": 0, 00:16:38.802 "state": "enabled", 00:16:38.802 "thread": "nvmf_tgt_poll_group_000", 00:16:38.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:38.802 "listen_address": { 00:16:38.802 "trtype": "TCP", 00:16:38.802 "adrfam": "IPv4", 00:16:38.802 "traddr": "10.0.0.3", 00:16:38.802 "trsvcid": "4420" 00:16:38.802 }, 00:16:38.802 "peer_address": { 00:16:38.802 "trtype": "TCP", 00:16:38.802 "adrfam": "IPv4", 00:16:38.802 "traddr": "10.0.0.1", 00:16:38.802 "trsvcid": "57420" 00:16:38.802 }, 00:16:38.802 "auth": { 00:16:38.802 "state": "completed", 00:16:38.802 "digest": "sha512", 00:16:38.802 "dhgroup": "ffdhe2048" 00:16:38.802 } 00:16:38.802 } 00:16:38.802 ]' 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.802 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.060 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.060 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.060 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.318 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:39.318 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:39.884 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.884 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:39.884 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.884 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.884 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.884 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.884 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.884 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.448 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.707 00:16:40.707 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.707 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.707 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.965 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.965 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.965 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.965 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.965 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.965 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.965 { 00:16:40.965 "cntlid": 113, 00:16:40.965 "qid": 0, 00:16:40.965 "state": "enabled", 00:16:40.965 "thread": "nvmf_tgt_poll_group_000", 00:16:40.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:40.965 "listen_address": { 00:16:40.965 "trtype": "TCP", 00:16:40.965 "adrfam": "IPv4", 00:16:40.965 "traddr": "10.0.0.3", 00:16:40.965 "trsvcid": "4420" 00:16:40.965 }, 00:16:40.965 "peer_address": { 00:16:40.965 "trtype": "TCP", 00:16:40.965 "adrfam": "IPv4", 00:16:40.965 "traddr": "10.0.0.1", 00:16:40.965 "trsvcid": "57450" 00:16:40.965 }, 00:16:40.965 "auth": { 00:16:40.965 "state": "completed", 00:16:40.965 "digest": "sha512", 00:16:40.965 "dhgroup": "ffdhe3072" 00:16:40.965 } 00:16:40.965 } 00:16:40.965 ]' 00:16:40.965 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.224 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.224 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.224 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.224 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.224 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.224 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.224 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.482 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:41.482 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:42.418 
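[annotation] The qpair dump is the assertion surface for each pass: listen_address is the target (10.0.0.3:4420), peer_address is the initiator's ephemeral port, cntlid advances with every freshly attached controller (97, 99, ... 113 across this section), and the auth object carries the negotiated parameters. The three jq probes the test runs could equally be collapsed into one call, for example:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
    # expected for the pass just shown: sha512 ffdhe3072 completed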
11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.418 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:42.418 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.418 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.418 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.418 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.418 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:42.418 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:42.676 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:42.676 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.676 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.676 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:42.676 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.676 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.676 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.676 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.676 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.677 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.677 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.677 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.677 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.935 00:16:42.935 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.935 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.935 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.193 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.193 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.193 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.193 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.193 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.193 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.193 { 00:16:43.193 "cntlid": 115, 00:16:43.193 "qid": 0, 00:16:43.193 "state": "enabled", 00:16:43.193 "thread": "nvmf_tgt_poll_group_000", 00:16:43.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:43.193 "listen_address": { 00:16:43.193 "trtype": "TCP", 00:16:43.193 "adrfam": "IPv4", 00:16:43.193 "traddr": "10.0.0.3", 00:16:43.193 "trsvcid": "4420" 00:16:43.193 }, 00:16:43.193 "peer_address": { 00:16:43.193 "trtype": "TCP", 00:16:43.193 "adrfam": "IPv4", 00:16:43.193 "traddr": "10.0.0.1", 00:16:43.193 "trsvcid": "57468" 00:16:43.193 }, 00:16:43.193 "auth": { 00:16:43.193 "state": "completed", 00:16:43.193 "digest": "sha512", 00:16:43.193 "dhgroup": "ffdhe3072" 00:16:43.193 } 00:16:43.193 } 00:16:43.193 ]' 00:16:43.193 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.193 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.193 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.451 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.451 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.451 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.451 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.451 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.709 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:43.709 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: 
--dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:44.287 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.287 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:44.287 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.287 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.288 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.288 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:44.288 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.561 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.562 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.562 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.128 00:16:45.128 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.128 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.128 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.387 { 00:16:45.387 "cntlid": 117, 00:16:45.387 "qid": 0, 00:16:45.387 "state": "enabled", 00:16:45.387 "thread": "nvmf_tgt_poll_group_000", 00:16:45.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:45.387 "listen_address": { 00:16:45.387 "trtype": "TCP", 00:16:45.387 "adrfam": "IPv4", 00:16:45.387 "traddr": "10.0.0.3", 00:16:45.387 "trsvcid": "4420" 00:16:45.387 }, 00:16:45.387 "peer_address": { 00:16:45.387 "trtype": "TCP", 00:16:45.387 "adrfam": "IPv4", 00:16:45.387 "traddr": "10.0.0.1", 00:16:45.387 "trsvcid": "57490" 00:16:45.387 }, 00:16:45.387 "auth": { 00:16:45.387 "state": "completed", 00:16:45.387 "digest": "sha512", 00:16:45.387 "dhgroup": "ffdhe3072" 00:16:45.387 } 00:16:45.387 } 00:16:45.387 ]' 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.387 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.646 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.646 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.646 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.904 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:45.904 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 
20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:46.470 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.470 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:46.470 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.470 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.728 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.728 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.728 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.728 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.986 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.245 00:16:47.245 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.245 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.245 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.812 { 00:16:47.812 "cntlid": 119, 00:16:47.812 "qid": 0, 00:16:47.812 "state": "enabled", 00:16:47.812 "thread": "nvmf_tgt_poll_group_000", 00:16:47.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:47.812 "listen_address": { 00:16:47.812 "trtype": "TCP", 00:16:47.812 "adrfam": "IPv4", 00:16:47.812 "traddr": "10.0.0.3", 00:16:47.812 "trsvcid": "4420" 00:16:47.812 }, 00:16:47.812 "peer_address": { 00:16:47.812 "trtype": "TCP", 00:16:47.812 "adrfam": "IPv4", 00:16:47.812 "traddr": "10.0.0.1", 00:16:47.812 "trsvcid": "57512" 00:16:47.812 }, 00:16:47.812 "auth": { 00:16:47.812 "state": "completed", 00:16:47.812 "digest": "sha512", 00:16:47.812 "dhgroup": "ffdhe3072" 00:16:47.812 } 00:16:47.812 } 00:16:47.812 ]' 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.812 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.071 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:48.071 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret 
DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:49.005 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.005 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:49.005 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.005 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.005 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.005 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.005 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.005 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:49.005 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.263 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.522 00:16:49.522 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.522 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.522 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.781 { 00:16:49.781 "cntlid": 121, 00:16:49.781 "qid": 0, 00:16:49.781 "state": "enabled", 00:16:49.781 "thread": "nvmf_tgt_poll_group_000", 00:16:49.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:49.781 "listen_address": { 00:16:49.781 "trtype": "TCP", 00:16:49.781 "adrfam": "IPv4", 00:16:49.781 "traddr": "10.0.0.3", 00:16:49.781 "trsvcid": "4420" 00:16:49.781 }, 00:16:49.781 "peer_address": { 00:16:49.781 "trtype": "TCP", 00:16:49.781 "adrfam": "IPv4", 00:16:49.781 "traddr": "10.0.0.1", 00:16:49.781 "trsvcid": "46002" 00:16:49.781 }, 00:16:49.781 "auth": { 00:16:49.781 "state": "completed", 00:16:49.781 "digest": "sha512", 00:16:49.781 "dhgroup": "ffdhe4096" 00:16:49.781 } 00:16:49.781 } 00:16:49.781 ]' 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.781 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.096 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.096 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.096 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.353 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:50.353 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:50.919 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.919 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:50.919 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.919 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.919 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.919 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.919 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:50.920 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:51.177 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:51.177 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.177 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.177 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.177 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.177 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.177 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.177 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.177 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.178 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.178 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.178 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.178 11:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.744 00:16:51.744 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.744 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.744 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.002 { 00:16:52.002 "cntlid": 123, 00:16:52.002 "qid": 0, 00:16:52.002 "state": "enabled", 00:16:52.002 "thread": "nvmf_tgt_poll_group_000", 00:16:52.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:52.002 "listen_address": { 00:16:52.002 "trtype": "TCP", 00:16:52.002 "adrfam": "IPv4", 00:16:52.002 "traddr": "10.0.0.3", 00:16:52.002 "trsvcid": "4420" 00:16:52.002 }, 00:16:52.002 "peer_address": { 00:16:52.002 "trtype": "TCP", 00:16:52.002 "adrfam": "IPv4", 00:16:52.002 "traddr": "10.0.0.1", 00:16:52.002 "trsvcid": "46036" 00:16:52.002 }, 00:16:52.002 "auth": { 00:16:52.002 "state": "completed", 00:16:52.002 "digest": "sha512", 00:16:52.002 "dhgroup": "ffdhe4096" 00:16:52.002 } 00:16:52.002 } 00:16:52.002 ]' 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.002 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.260 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.260 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.260 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.519 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret 
DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:52.519 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:16:53.085 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.085 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:53.086 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.086 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.086 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.086 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.086 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.086 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.653 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.912 00:16:53.912 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.912 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.912 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.170 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.170 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.170 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.170 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.170 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.170 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.170 { 00:16:54.170 "cntlid": 125, 00:16:54.170 "qid": 0, 00:16:54.170 "state": "enabled", 00:16:54.170 "thread": "nvmf_tgt_poll_group_000", 00:16:54.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:54.170 "listen_address": { 00:16:54.170 "trtype": "TCP", 00:16:54.170 "adrfam": "IPv4", 00:16:54.170 "traddr": "10.0.0.3", 00:16:54.170 "trsvcid": "4420" 00:16:54.170 }, 00:16:54.170 "peer_address": { 00:16:54.170 "trtype": "TCP", 00:16:54.170 "adrfam": "IPv4", 00:16:54.170 "traddr": "10.0.0.1", 00:16:54.170 "trsvcid": "46058" 00:16:54.170 }, 00:16:54.170 "auth": { 00:16:54.170 "state": "completed", 00:16:54.170 "digest": "sha512", 00:16:54.170 "dhgroup": "ffdhe4096" 00:16:54.170 } 00:16:54.170 } 00:16:54.170 ]' 00:16:54.170 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.170 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.170 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.430 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.430 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.430 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.430 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.430 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.689 11:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:54.689 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:16:55.623 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.623 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:55.623 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.623 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.623 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.623 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.623 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.623 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.883 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.141 00:16:56.141 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.141 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.141 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.400 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.400 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.400 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.400 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.659 { 00:16:56.659 "cntlid": 127, 00:16:56.659 "qid": 0, 00:16:56.659 "state": "enabled", 00:16:56.659 "thread": "nvmf_tgt_poll_group_000", 00:16:56.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:56.659 "listen_address": { 00:16:56.659 "trtype": "TCP", 00:16:56.659 "adrfam": "IPv4", 00:16:56.659 "traddr": "10.0.0.3", 00:16:56.659 "trsvcid": "4420" 00:16:56.659 }, 00:16:56.659 "peer_address": { 00:16:56.659 "trtype": "TCP", 00:16:56.659 "adrfam": "IPv4", 00:16:56.659 "traddr": "10.0.0.1", 00:16:56.659 "trsvcid": "46080" 00:16:56.659 }, 00:16:56.659 "auth": { 00:16:56.659 "state": "completed", 00:16:56.659 "digest": "sha512", 00:16:56.659 "dhgroup": "ffdhe4096" 00:16:56.659 } 00:16:56.659 } 00:16:56.659 ]' 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.659 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
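The block above is one complete pass of the sha512/ffdhe4096 matrix for key3: restrict the host options, authorize the host on the subsystem, attach a controller, check it, detach. Condensed, the host-side sequence the trace keeps repeating looks like the sketch below. The RPC invocations, addresses and NQNs are taken from the trace itself; the variable names are illustrative only, and rpc_cmd is assumed to address the target application's default RPC socket (only the host application is pinned to /var/tmp/host.sock).

hostrpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81"

# Pin the host application to a single digest/dhgroup combination for this pass.
$hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: allow the host NQN on the subsystem with the key under test
# (key3 has no companion controller key in this run, so --dhchap-ctrlr-key is omitted).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# Host side: attach a controller, which performs DH-HMAC-CHAP with the same key.
$hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3

# Confirm the controller came up, then inspect the qpair state on the target.
$hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"             # auth.state must be "completed"

# Tear down before the next key/dhgroup combination.
$hostrpc bdev_nvme_detach_controller nvme0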
00:16:56.921 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:56.921 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:16:57.854 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.854 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:16:57.854 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.854 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.854 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.854 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.854 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.854 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:57.854 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
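In between those host-application passes, the same key material is also exercised through the Linux kernel initiator, as in the nvme connect/disconnect pair above. A minimal sketch of that step follows; the flags mirror the nvme-cli call in the log, while the DHHC-1 secrets are abbreviated to placeholders here (the real values are the base64 blobs printed in the trace).

# Authenticate the kernel initiator with DH-HMAC-CHAP. --dhchap-secret carries the
# host key; --dhchap-ctrl-secret, when the key pair has one, enables bidirectional auth.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 \
    --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 \
    --dhchap-secret 'DHHC-1:01:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>'

# Drop the session and deauthorize the host again before the next pass.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81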
00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.112 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.678 00:16:58.678 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.678 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.678 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.937 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.937 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.937 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.937 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.937 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.937 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.937 { 00:16:58.937 "cntlid": 129, 00:16:58.937 "qid": 0, 00:16:58.937 "state": "enabled", 00:16:58.937 "thread": "nvmf_tgt_poll_group_000", 00:16:58.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:16:58.937 "listen_address": { 00:16:58.937 "trtype": "TCP", 00:16:58.937 "adrfam": "IPv4", 00:16:58.937 "traddr": "10.0.0.3", 00:16:58.937 "trsvcid": "4420" 00:16:58.937 }, 00:16:58.937 "peer_address": { 00:16:58.937 "trtype": "TCP", 00:16:58.937 "adrfam": "IPv4", 00:16:58.937 "traddr": "10.0.0.1", 00:16:58.937 "trsvcid": "58090" 00:16:58.937 }, 00:16:58.937 "auth": { 00:16:58.937 "state": "completed", 00:16:58.937 "digest": "sha512", 00:16:58.937 "dhgroup": "ffdhe6144" 00:16:58.937 } 00:16:58.937 } 00:16:58.937 ]' 00:16:58.937 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.195 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.195 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.195 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.195 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.195 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.195 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.195 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.761 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:16:59.761 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:17:00.326 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.326 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:00.326 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.326 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.326 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.326 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.327 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:00.327 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.585 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.150 00:17:01.150 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.150 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.150 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.408 { 00:17:01.408 "cntlid": 131, 00:17:01.408 "qid": 0, 00:17:01.408 "state": "enabled", 00:17:01.408 "thread": "nvmf_tgt_poll_group_000", 00:17:01.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:01.408 "listen_address": { 00:17:01.408 "trtype": "TCP", 00:17:01.408 "adrfam": "IPv4", 00:17:01.408 "traddr": "10.0.0.3", 00:17:01.408 "trsvcid": "4420" 00:17:01.408 }, 00:17:01.408 "peer_address": { 00:17:01.408 "trtype": "TCP", 00:17:01.408 "adrfam": "IPv4", 00:17:01.408 "traddr": "10.0.0.1", 00:17:01.408 "trsvcid": "58120" 00:17:01.408 }, 00:17:01.408 "auth": { 00:17:01.408 "state": "completed", 00:17:01.408 "digest": "sha512", 00:17:01.408 "dhgroup": "ffdhe6144" 00:17:01.408 } 00:17:01.408 } 00:17:01.408 ]' 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.408 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.667 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
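The success criterion for each pass is the target's own view of the queue pair, checked right above: the negotiated digest and DH group must match the combination under test and the authentication state must read "completed". In shell form, the check amounts to the sketch below, reusing the jq filters from the trace (here against the sha512/ffdhe6144 pass).

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# Any mismatch fails the run for this digest/dhgroup/key combination.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]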
00:17:01.667 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.667 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.924 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:17:01.925 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:17:02.859 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.859 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:02.859 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.859 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.859 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.859 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.859 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.859 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.146 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:03.146 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.146 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.146 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.146 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.146 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.147 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.147 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.147 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.147 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.147 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.147 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.147 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.722 00:17:03.722 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.723 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.723 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.981 { 00:17:03.981 "cntlid": 133, 00:17:03.981 "qid": 0, 00:17:03.981 "state": "enabled", 00:17:03.981 "thread": "nvmf_tgt_poll_group_000", 00:17:03.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:03.981 "listen_address": { 00:17:03.981 "trtype": "TCP", 00:17:03.981 "adrfam": "IPv4", 00:17:03.981 "traddr": "10.0.0.3", 00:17:03.981 "trsvcid": "4420" 00:17:03.981 }, 00:17:03.981 "peer_address": { 00:17:03.981 "trtype": "TCP", 00:17:03.981 "adrfam": "IPv4", 00:17:03.981 "traddr": "10.0.0.1", 00:17:03.981 "trsvcid": "58144" 00:17:03.981 }, 00:17:03.981 "auth": { 00:17:03.981 "state": "completed", 00:17:03.981 "digest": "sha512", 00:17:03.981 "dhgroup": "ffdhe6144" 00:17:03.981 } 00:17:03.981 } 00:17:03.981 ]' 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.981 11:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.981 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.547 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:17:04.547 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:17:05.114 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.114 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:05.114 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.114 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.114 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.114 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.114 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.114 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.679 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.938 00:17:05.938 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.938 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.938 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.503 { 00:17:06.503 "cntlid": 135, 00:17:06.503 "qid": 0, 00:17:06.503 "state": "enabled", 00:17:06.503 "thread": "nvmf_tgt_poll_group_000", 00:17:06.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:06.503 "listen_address": { 00:17:06.503 "trtype": "TCP", 00:17:06.503 "adrfam": "IPv4", 00:17:06.503 "traddr": "10.0.0.3", 00:17:06.503 "trsvcid": "4420" 00:17:06.503 }, 00:17:06.503 "peer_address": { 00:17:06.503 "trtype": "TCP", 00:17:06.503 "adrfam": "IPv4", 00:17:06.503 "traddr": "10.0.0.1", 00:17:06.503 "trsvcid": "58166" 00:17:06.503 }, 00:17:06.503 "auth": { 00:17:06.503 "state": "completed", 00:17:06.503 "digest": "sha512", 00:17:06.503 "dhgroup": "ffdhe6144" 00:17:06.503 } 00:17:06.503 } 00:17:06.503 ]' 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.503 
11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.503 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.070 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:17:07.070 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:17:07.636 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.636 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:07.636 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.636 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.636 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.636 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.636 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.636 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.636 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.894 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.830 00:17:08.830 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.831 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.831 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.397 { 00:17:09.397 "cntlid": 137, 00:17:09.397 "qid": 0, 00:17:09.397 "state": "enabled", 00:17:09.397 "thread": "nvmf_tgt_poll_group_000", 00:17:09.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:09.397 "listen_address": { 00:17:09.397 "trtype": "TCP", 00:17:09.397 "adrfam": "IPv4", 00:17:09.397 "traddr": "10.0.0.3", 00:17:09.397 "trsvcid": "4420" 00:17:09.397 }, 00:17:09.397 "peer_address": { 00:17:09.397 "trtype": "TCP", 00:17:09.397 "adrfam": "IPv4", 00:17:09.397 "traddr": "10.0.0.1", 00:17:09.397 "trsvcid": "50502" 00:17:09.397 }, 00:17:09.397 "auth": { 00:17:09.397 "state": "completed", 00:17:09.397 "digest": "sha512", 00:17:09.397 "dhgroup": "ffdhe8192" 00:17:09.397 } 00:17:09.397 } 00:17:09.397 ]' 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.397 11:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.397 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.695 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:17:09.695 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:17:10.629 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.629 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:10.629 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.629 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.629 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.629 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.629 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.629 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.888 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.456 00:17:11.456 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.456 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.456 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.024 { 00:17:12.024 "cntlid": 139, 00:17:12.024 "qid": 0, 00:17:12.024 "state": "enabled", 00:17:12.024 "thread": "nvmf_tgt_poll_group_000", 00:17:12.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:12.024 "listen_address": { 00:17:12.024 "trtype": "TCP", 00:17:12.024 "adrfam": "IPv4", 00:17:12.024 "traddr": "10.0.0.3", 00:17:12.024 "trsvcid": "4420" 00:17:12.024 }, 00:17:12.024 "peer_address": { 00:17:12.024 "trtype": "TCP", 00:17:12.024 "adrfam": "IPv4", 00:17:12.024 "traddr": "10.0.0.1", 00:17:12.024 "trsvcid": "50522" 00:17:12.024 }, 00:17:12.024 "auth": { 00:17:12.024 "state": "completed", 00:17:12.024 "digest": "sha512", 00:17:12.024 "dhgroup": "ffdhe8192" 00:17:12.024 } 00:17:12.024 } 00:17:12.024 ]' 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.024 11:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.024 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.282 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:17:12.282 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: --dhchap-ctrl-secret DHHC-1:02:OWVmMjZmYmRjYWRhMzIwMmRhNGU3ZGI2MzcxMzg3MDZiODA4MGIwMjc2NGNkMTMwWN+fRg==: 00:17:13.217 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.217 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:13.218 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.218 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.218 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.218 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.218 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.218 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.475 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.435 00:17:14.435 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.435 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.435 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.435 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.435 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.435 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.435 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.435 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.435 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.435 { 00:17:14.435 "cntlid": 141, 00:17:14.435 "qid": 0, 00:17:14.435 "state": "enabled", 00:17:14.435 "thread": "nvmf_tgt_poll_group_000", 00:17:14.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:14.435 "listen_address": { 00:17:14.435 "trtype": "TCP", 00:17:14.435 "adrfam": "IPv4", 00:17:14.435 "traddr": "10.0.0.3", 00:17:14.435 "trsvcid": "4420" 00:17:14.435 }, 00:17:14.435 "peer_address": { 00:17:14.435 "trtype": "TCP", 00:17:14.435 "adrfam": "IPv4", 00:17:14.435 "traddr": "10.0.0.1", 00:17:14.435 "trsvcid": "50540" 00:17:14.435 }, 00:17:14.435 "auth": { 00:17:14.435 "state": "completed", 00:17:14.435 "digest": "sha512", 00:17:14.435 "dhgroup": "ffdhe8192" 00:17:14.435 } 00:17:14.435 } 00:17:14.435 ]' 00:17:14.435 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
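After each attach the test reads the qpair back from the target and asserts the negotiated parameters, using the jq filters seen in the trace ('.[0].auth.digest', '.[0].auth.dhgroup', '.[0].auth.state'). A standalone version of that check for the round just shown (sha512, ffdhe8192), assuming a single qpair on the subsystem, might look like:

  # target side: confirm what the qpair actually negotiated
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]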
00:17:14.705 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.705 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.705 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.705 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.705 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.705 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.705 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.963 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:17:14.963 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:01:YzU1ODg4ZWYzNTZiOTQzNzkxYWU5ODQ2Y2UxZTI2OTY5me4S: 00:17:15.530 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.530 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:15.530 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.530 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.787 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.787 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.787 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.787 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.045 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:16.045 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.045 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.045 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.045 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:17:16.045 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.045 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:17:16.046 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.046 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.046 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.046 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.046 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.046 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.613 00:17:16.613 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.613 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.613 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.179 { 00:17:17.179 "cntlid": 143, 00:17:17.179 "qid": 0, 00:17:17.179 "state": "enabled", 00:17:17.179 "thread": "nvmf_tgt_poll_group_000", 00:17:17.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:17.179 "listen_address": { 00:17:17.179 "trtype": "TCP", 00:17:17.179 "adrfam": "IPv4", 00:17:17.179 "traddr": "10.0.0.3", 00:17:17.179 "trsvcid": "4420" 00:17:17.179 }, 00:17:17.179 "peer_address": { 00:17:17.179 "trtype": "TCP", 00:17:17.179 "adrfam": "IPv4", 00:17:17.179 "traddr": "10.0.0.1", 00:17:17.179 "trsvcid": "50566" 00:17:17.179 }, 00:17:17.179 "auth": { 00:17:17.179 "state": "completed", 00:17:17.179 "digest": "sha512", 00:17:17.179 "dhgroup": "ffdhe8192" 00:17:17.179 } 00:17:17.179 } 00:17:17.179 ]' 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.179 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.438 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:17:17.438 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:18.372 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:18.630 11:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.630 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.280 00:17:19.280 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.280 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.280 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.538 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.538 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.538 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.538 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.538 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.538 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.538 { 00:17:19.538 "cntlid": 145, 00:17:19.538 "qid": 0, 00:17:19.538 "state": "enabled", 00:17:19.538 "thread": "nvmf_tgt_poll_group_000", 00:17:19.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:19.538 "listen_address": { 00:17:19.538 "trtype": "TCP", 00:17:19.538 "adrfam": "IPv4", 00:17:19.538 "traddr": "10.0.0.3", 
00:17:19.538 "trsvcid": "4420" 00:17:19.538 }, 00:17:19.538 "peer_address": { 00:17:19.538 "trtype": "TCP", 00:17:19.538 "adrfam": "IPv4", 00:17:19.538 "traddr": "10.0.0.1", 00:17:19.538 "trsvcid": "47788" 00:17:19.538 }, 00:17:19.538 "auth": { 00:17:19.538 "state": "completed", 00:17:19.538 "digest": "sha512", 00:17:19.538 "dhgroup": "ffdhe8192" 00:17:19.538 } 00:17:19.538 } 00:17:19.538 ]' 00:17:19.538 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.796 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.796 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.796 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.796 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.796 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.796 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.796 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.053 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:17:20.053 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:00:NzBlMmU5ZWFhNjkxYmIwYWNmNzNjNzNmYjJmMjZiNTFjYzdjN2QyMjRmZjNlZjhkwH7BpA==: --dhchap-ctrl-secret DHHC-1:03:NjA0ODQ2ZTVkY2NhZmE1MjVmMGIxYWZkYzA2YjU5MGNkODA5NTE2ZDliYWQyMzhmZGQyZWE0ZmM4N2I3YWEyMYhLDcs=: 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.985 
11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.985 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:20.986 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:20.986 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:21.553 request: 00:17:21.553 { 00:17:21.553 "name": "nvme0", 00:17:21.553 "trtype": "tcp", 00:17:21.553 "traddr": "10.0.0.3", 00:17:21.553 "adrfam": "ipv4", 00:17:21.553 "trsvcid": "4420", 00:17:21.553 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:21.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:21.553 "prchk_reftag": false, 00:17:21.553 "prchk_guard": false, 00:17:21.553 "hdgst": false, 00:17:21.553 "ddgst": false, 00:17:21.553 "dhchap_key": "key2", 00:17:21.553 "allow_unrecognized_csi": false, 00:17:21.553 "method": "bdev_nvme_attach_controller", 00:17:21.553 "req_id": 1 00:17:21.553 } 00:17:21.553 Got JSON-RPC error response 00:17:21.553 response: 00:17:21.553 { 00:17:21.553 "code": -5, 00:17:21.553 "message": "Input/output error" 00:17:21.553 } 00:17:21.553 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:21.553 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.553 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.553 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.553 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:21.553 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.553 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
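The request/response block above is an expected failure: the host was re-registered with key1 only, so attempting to attach with key2 is rejected and bdev_nvme_attach_controller surfaces JSON-RPC error -5 (Input/output error). NOT is the suite's helper that asserts a non-zero exit; without it, the same negative check could be written roughly as below (HOSTNQN/SUBNQN as in the earlier sketch):

  # negative case: the target only knows key1 for this host, so key2 must be refused
  if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2; then
      echo "unexpected: attach succeeded with a key the target does not have" >&2
      exit 1
  fi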
00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.811 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:22.378 request: 00:17:22.378 { 00:17:22.378 "name": "nvme0", 00:17:22.378 "trtype": "tcp", 00:17:22.378 "traddr": "10.0.0.3", 00:17:22.378 "adrfam": "ipv4", 00:17:22.378 "trsvcid": "4420", 00:17:22.378 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:22.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:22.378 "prchk_reftag": false, 00:17:22.378 "prchk_guard": false, 00:17:22.378 "hdgst": false, 00:17:22.378 "ddgst": false, 00:17:22.378 "dhchap_key": "key1", 00:17:22.378 "dhchap_ctrlr_key": "ckey2", 00:17:22.378 "allow_unrecognized_csi": false, 00:17:22.378 "method": "bdev_nvme_attach_controller", 00:17:22.378 "req_id": 1 00:17:22.378 } 00:17:22.378 Got JSON-RPC error response 00:17:22.378 response: 00:17:22.378 { 00:17:22.378 "code": -5, 00:17:22.378 "message": "Input/output error" 00:17:22.378 } 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:22.378 11:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.378 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.311 request: 00:17:23.311 { 00:17:23.311 "name": "nvme0", 00:17:23.311 "trtype": "tcp", 00:17:23.311 "traddr": "10.0.0.3", 00:17:23.311 "adrfam": "ipv4", 00:17:23.311 "trsvcid": "4420", 00:17:23.311 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:17:23.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:23.311 "prchk_reftag": false, 00:17:23.311 "prchk_guard": false, 00:17:23.311 "hdgst": false, 00:17:23.311 "ddgst": false, 00:17:23.311 "dhchap_key": "key1", 00:17:23.311 "dhchap_ctrlr_key": "ckey1", 00:17:23.311 "allow_unrecognized_csi": false, 00:17:23.311 "method": "bdev_nvme_attach_controller", 00:17:23.311 "req_id": 1 00:17:23.311 } 00:17:23.311 Got JSON-RPC error response 00:17:23.311 response: 00:17:23.311 { 00:17:23.311 "code": -5, 00:17:23.311 "message": "Input/output error" 00:17:23.311 } 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 70409 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70409 ']' 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70409 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70409 00:17:23.311 killing process with pid 70409 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70409' 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70409 00:17:23.311 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70409 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=73628 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 73628 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 73628 ']' 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.247 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.624 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.624 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:25.624 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:25.624 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.624 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.624 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.624 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:25.624 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 73628 00:17:25.625 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 73628 ']' 00:17:25.625 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.625 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.625 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:25.625 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.625 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.883 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.883 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:25.883 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:25.883 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.883 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.141 null0 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wnx 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.I3C ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I3C 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.63t 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.kI5 ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kI5 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:26.141 11:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Vl2 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.doO ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.doO 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.921 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:17:26.141 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.142 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.142 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.142 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.142 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:17:26.142 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.518 nvme0n1 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.518 { 00:17:27.518 "cntlid": 1, 00:17:27.518 "qid": 0, 00:17:27.518 "state": "enabled", 00:17:27.518 "thread": "nvmf_tgt_poll_group_000", 00:17:27.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:27.518 "listen_address": { 00:17:27.518 "trtype": "TCP", 00:17:27.518 "adrfam": "IPv4", 00:17:27.518 "traddr": "10.0.0.3", 00:17:27.518 "trsvcid": "4420" 00:17:27.518 }, 00:17:27.518 "peer_address": { 00:17:27.518 "trtype": "TCP", 00:17:27.518 "adrfam": "IPv4", 00:17:27.518 "traddr": "10.0.0.1", 00:17:27.518 "trsvcid": "47854" 00:17:27.518 }, 00:17:27.518 "auth": { 00:17:27.518 "state": "completed", 00:17:27.518 "digest": "sha512", 00:17:27.518 "dhgroup": "ffdhe8192" 00:17:27.518 } 00:17:27.518 } 00:17:27.518 ]' 00:17:27.518 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.776 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.776 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.776 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.776 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.776 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.776 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.776 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.343 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:17:28.343 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key3 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:28.909 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.475 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.733 request: 00:17:29.733 { 00:17:29.733 "name": "nvme0", 00:17:29.733 "trtype": "tcp", 00:17:29.733 "traddr": "10.0.0.3", 00:17:29.733 "adrfam": "ipv4", 00:17:29.733 "trsvcid": "4420", 00:17:29.733 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:29.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:29.733 "prchk_reftag": false, 00:17:29.733 "prchk_guard": false, 00:17:29.733 "hdgst": false, 00:17:29.733 "ddgst": false, 00:17:29.733 "dhchap_key": "key3", 00:17:29.733 "allow_unrecognized_csi": false, 00:17:29.733 "method": "bdev_nvme_attach_controller", 00:17:29.733 "req_id": 1 00:17:29.733 } 00:17:29.733 Got JSON-RPC error response 00:17:29.733 response: 00:17:29.733 { 00:17:29.733 "code": -5, 00:17:29.733 "message": "Input/output error" 00:17:29.733 } 00:17:29.733 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:29.733 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.733 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.733 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.733 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:29.733 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:29.733 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:29.733 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.991 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.249 request: 00:17:30.249 { 00:17:30.249 "name": "nvme0", 00:17:30.249 "trtype": "tcp", 00:17:30.250 "traddr": "10.0.0.3", 00:17:30.250 "adrfam": "ipv4", 00:17:30.250 "trsvcid": "4420", 00:17:30.250 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:30.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:30.250 "prchk_reftag": false, 00:17:30.250 "prchk_guard": false, 00:17:30.250 "hdgst": false, 00:17:30.250 "ddgst": false, 00:17:30.250 "dhchap_key": "key3", 00:17:30.250 "allow_unrecognized_csi": false, 00:17:30.250 "method": "bdev_nvme_attach_controller", 00:17:30.250 "req_id": 1 00:17:30.250 } 00:17:30.250 Got JSON-RPC error response 00:17:30.250 response: 00:17:30.250 { 00:17:30.250 "code": -5, 00:17:30.250 "message": "Input/output error" 00:17:30.250 } 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:30.250 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:30.508 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:31.075 request: 00:17:31.075 { 00:17:31.075 "name": "nvme0", 00:17:31.075 "trtype": "tcp", 00:17:31.075 "traddr": "10.0.0.3", 00:17:31.075 "adrfam": "ipv4", 00:17:31.075 "trsvcid": "4420", 00:17:31.075 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:31.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:31.075 "prchk_reftag": false, 00:17:31.075 "prchk_guard": false, 00:17:31.075 "hdgst": false, 00:17:31.075 "ddgst": false, 00:17:31.075 "dhchap_key": "key0", 00:17:31.075 "dhchap_ctrlr_key": "key1", 00:17:31.075 "allow_unrecognized_csi": false, 00:17:31.075 "method": "bdev_nvme_attach_controller", 00:17:31.075 "req_id": 1 00:17:31.075 } 00:17:31.075 Got JSON-RPC error response 00:17:31.075 response: 00:17:31.075 { 00:17:31.075 "code": -5, 00:17:31.075 "message": "Input/output error" 00:17:31.075 } 00:17:31.075 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:31.075 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.075 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.075 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:17:31.075 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:31.075 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:31.075 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:31.333 nvme0n1 00:17:31.333 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:31.333 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.333 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:31.592 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.592 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.592 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.159 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 00:17:32.159 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.159 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.159 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.159 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:32.159 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:32.159 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:33.092 nvme0n1 00:17:33.092 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:33.092 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:33.092 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.350 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.350 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:33.350 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.350 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.350 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.350 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:33.350 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.350 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:33.916 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.916 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:17:33.916 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid 20cf3ff5-7c8b-4175-aa20-a641780c6f81 -l 0 --dhchap-secret DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: --dhchap-ctrl-secret DHHC-1:03:YjhmZWIyOGRmYzNiMzBlNzA0ZWRiMTgzMjgyMTRjMjg4NmZmYWZhOGVjZjdhNzc0YjhhZDY3MTAzZjVjMTE3MqSWQx8=: 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:34.890 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:35.826 request: 00:17:35.826 { 00:17:35.826 "name": "nvme0", 00:17:35.826 "trtype": "tcp", 00:17:35.826 "traddr": "10.0.0.3", 00:17:35.826 "adrfam": "ipv4", 00:17:35.826 "trsvcid": "4420", 00:17:35.826 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:35.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81", 00:17:35.826 "prchk_reftag": false, 00:17:35.826 "prchk_guard": false, 00:17:35.826 "hdgst": false, 00:17:35.826 "ddgst": false, 00:17:35.826 "dhchap_key": "key1", 00:17:35.826 "allow_unrecognized_csi": false, 00:17:35.826 "method": "bdev_nvme_attach_controller", 00:17:35.826 "req_id": 1 00:17:35.826 } 00:17:35.826 Got JSON-RPC error response 00:17:35.826 response: 00:17:35.826 { 00:17:35.826 "code": -5, 00:17:35.826 "message": "Input/output error" 00:17:35.826 } 00:17:35.826 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:35.826 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.826 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.826 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.826 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:35.826 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:35.826 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:37.202 nvme0n1 00:17:37.202 
11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:37.202 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:37.202 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.202 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.202 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.202 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.460 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:37.460 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.460 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.460 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.460 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:37.460 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:37.460 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:38.026 nvme0n1 00:17:38.026 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:38.026 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:38.026 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.285 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.285 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.285 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.543 11:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: '' 2s 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: ]] 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2QxMDc4NzMyYzA5MGFmODYyYzZmNjdkMGQ3YThiNDaKRC82: 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:38.543 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: 2s 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:41.077 11:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:41.077 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: ]] 00:17:41.078 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmRlMTE3MjVmMmFlOWFlMzE1ZTQ0YmE4YTg0OTQ5ZGYwZGE2NzMzNTYxMTBkMTcz6Qcbgw==: 00:17:41.078 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:41.078 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.979 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:42.980 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:42.980 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:43.949 nvme0n1 00:17:43.949 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:43.949 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.949 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.949 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.949 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:43.949 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:44.516 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:44.516 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:44.516 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.775 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.775 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:44.775 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.775 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.775 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.775 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:44.775 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:45.033 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:45.033 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:45.033 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.598 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.598 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:45.598 11:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.598 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.598 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.599 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:45.599 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:45.599 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:45.599 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:45.599 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.599 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:45.599 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.599 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:45.599 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:46.165 request: 00:17:46.165 { 00:17:46.165 "name": "nvme0", 00:17:46.165 "dhchap_key": "key1", 00:17:46.165 "dhchap_ctrlr_key": "key3", 00:17:46.165 "method": "bdev_nvme_set_keys", 00:17:46.165 "req_id": 1 00:17:46.165 } 00:17:46.165 Got JSON-RPC error response 00:17:46.165 response: 00:17:46.165 { 00:17:46.165 "code": -13, 00:17:46.165 "message": "Permission denied" 00:17:46.165 } 00:17:46.165 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:46.165 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:46.165 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:46.165 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:46.165 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:46.165 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:46.165 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.423 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:46.423 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:47.357 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:47.357 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.357 11:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:47.922 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:47.922 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:47.922 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.922 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.922 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.922 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:47.923 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:47.923 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:48.862 nvme0n1 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:17:48.862 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:49.429 request: 00:17:49.429 { 00:17:49.429 "name": "nvme0", 00:17:49.429 "dhchap_key": "key2", 00:17:49.429 "dhchap_ctrlr_key": "key0", 00:17:49.429 "method": "bdev_nvme_set_keys", 00:17:49.429 "req_id": 1 00:17:49.429 } 00:17:49.429 Got JSON-RPC error response 00:17:49.429 response: 00:17:49.429 { 00:17:49.429 "code": -13, 00:17:49.429 "message": "Permission denied" 00:17:49.429 } 00:17:49.429 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:49.429 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.429 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.429 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.429 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:49.429 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:49.429 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.026 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:50.026 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:50.980 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:50.980 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:50.980 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.238 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:51.238 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:51.238 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:51.238 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 70441 00:17:51.238 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70441 ']' 00:17:51.239 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70441 00:17:51.239 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:51.239 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.239 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70441 00:17:51.239 killing process with pid 70441 00:17:51.239 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:51.239 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:51.239 11:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70441' 00:17:51.239 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70441 00:17:51.239 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70441 00:17:53.139 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:53.140 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:53.140 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:53.398 rmmod nvme_tcp 00:17:53.398 rmmod nvme_fabrics 00:17:53.398 rmmod nvme_keyring 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 73628 ']' 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 73628 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 73628 ']' 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 73628 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73628 00:17:53.398 killing process with pid 73628 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73628' 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 73628 00:17:53.398 11:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 73628 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
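The auth.sh trace above (steps 245-273) exercises DH-HMAC-CHAP re-keying: the target grants the host new key IDs with nvmf_subsystem_set_keys, the host then rotates its side of the existing controller with bdev_nvme_set_keys, and a request for a key pair the target has not granted is rejected with JSON-RPC error -13 (Permission denied). A condensed sketch of that sequence (illustrative only; socket path, NQNs and key names are the ones used in this run):

  # target side: allow this host to use key2/key3 for the subsystem
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # host side: re-key the attached controller to match
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # a key pair the target never granted fails with -13 Permission denied
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3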
00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:54.334 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wnx /tmp/spdk.key-sha256.63t /tmp/spdk.key-sha384.Vl2 /tmp/spdk.key-sha512.921 /tmp/spdk.key-sha512.I3C /tmp/spdk.key-sha384.kI5 /tmp/spdk.key-sha256.doO '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:17:54.592 00:17:54.592 real 3m33.388s 00:17:54.592 user 8m28.982s 00:17:54.592 sys 0m30.863s 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.592 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.592 ************************************ 00:17:54.592 END TEST nvmf_auth_target 
00:17:54.592 ************************************ 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.867 ************************************ 00:17:54.867 START TEST nvmf_bdevio_no_huge 00:17:54.867 ************************************ 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:54.867 * Looking for test storage... 00:17:54.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:54.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.867 --rc genhtml_branch_coverage=1 00:17:54.867 --rc genhtml_function_coverage=1 00:17:54.867 --rc genhtml_legend=1 00:17:54.867 --rc geninfo_all_blocks=1 00:17:54.867 --rc geninfo_unexecuted_blocks=1 00:17:54.867 00:17:54.867 ' 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:54.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.867 --rc genhtml_branch_coverage=1 00:17:54.867 --rc genhtml_function_coverage=1 00:17:54.867 --rc genhtml_legend=1 00:17:54.867 --rc geninfo_all_blocks=1 00:17:54.867 --rc geninfo_unexecuted_blocks=1 00:17:54.867 00:17:54.867 ' 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:54.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.867 --rc genhtml_branch_coverage=1 00:17:54.867 --rc genhtml_function_coverage=1 00:17:54.867 --rc genhtml_legend=1 00:17:54.867 --rc geninfo_all_blocks=1 00:17:54.867 --rc geninfo_unexecuted_blocks=1 00:17:54.867 00:17:54.867 ' 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:54.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.867 --rc genhtml_branch_coverage=1 00:17:54.867 --rc genhtml_function_coverage=1 00:17:54.867 --rc genhtml_legend=1 00:17:54.867 --rc geninfo_all_blocks=1 00:17:54.867 --rc geninfo_unexecuted_blocks=1 00:17:54.867 00:17:54.867 ' 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.867 
11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.867 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.868 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:54.868 
11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:54.868 Cannot find device "nvmf_init_br" 00:17:54.868 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:17:54.869 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:54.869 Cannot find device "nvmf_init_br2" 00:17:54.869 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:17:54.869 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:55.151 Cannot find device "nvmf_tgt_br" 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:55.151 Cannot find device "nvmf_tgt_br2" 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:55.151 Cannot find device "nvmf_init_br" 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:55.151 Cannot find device "nvmf_init_br2" 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:55.151 Cannot find device "nvmf_tgt_br" 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:55.151 Cannot find device "nvmf_tgt_br2" 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:55.151 Cannot find device "nvmf_br" 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:55.151 Cannot find device "nvmf_init_if" 00:17:55.151 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:55.152 Cannot find device "nvmf_init_if2" 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:17:55.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:55.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:55.152 11:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:55.152 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:55.411 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:55.411 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:55.411 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:55.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:55.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:55.411 00:17:55.411 --- 10.0.0.3 ping statistics --- 00:17:55.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.411 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:55.411 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:55.411 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:17:55.411 00:17:55.411 --- 10.0.0.4 ping statistics --- 00:17:55.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.411 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:55.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:55.411 00:17:55.411 --- 10.0.0.1 ping statistics --- 00:17:55.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.411 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:55.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:55.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:55.411 00:17:55.411 --- 10.0.0.2 ping statistics --- 00:17:55.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.411 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=74327 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 74327 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 74327 ']' 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.411 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:55.411 [2024-12-10 11:21:02.188993] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:17:55.411 [2024-12-10 11:21:02.189174] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:55.670 [2024-12-10 11:21:02.412228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:55.928 [2024-12-10 11:21:02.586106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.928 [2024-12-10 11:21:02.586189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.928 [2024-12-10 11:21:02.586211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.928 [2024-12-10 11:21:02.586228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.928 [2024-12-10 11:21:02.586241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.928 [2024-12-10 11:21:02.588402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:55.928 [2024-12-10 11:21:02.588461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:55.928 [2024-12-10 11:21:02.588522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:55.928 [2024-12-10 11:21:02.588526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.928 [2024-12-10 11:21:02.750312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.496 [2024-12-10 11:21:03.183373] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.496 Malloc0 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.496 11:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.496 [2024-12-10 11:21:03.284205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:56.496 { 00:17:56.496 "params": { 00:17:56.496 "name": "Nvme$subsystem", 00:17:56.496 "trtype": "$TEST_TRANSPORT", 00:17:56.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:56.496 "adrfam": "ipv4", 00:17:56.496 "trsvcid": "$NVMF_PORT", 00:17:56.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:56.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:56.496 "hdgst": ${hdgst:-false}, 00:17:56.496 "ddgst": ${ddgst:-false} 00:17:56.496 }, 00:17:56.496 "method": "bdev_nvme_attach_controller" 00:17:56.496 } 00:17:56.496 EOF 00:17:56.496 )") 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:56.496 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:56.496 "params": { 00:17:56.496 "name": "Nvme1", 00:17:56.496 "trtype": "tcp", 00:17:56.496 "traddr": "10.0.0.3", 00:17:56.496 "adrfam": "ipv4", 00:17:56.496 "trsvcid": "4420", 00:17:56.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.496 "hdgst": false, 00:17:56.496 "ddgst": false 00:17:56.496 }, 00:17:56.496 "method": "bdev_nvme_attach_controller" 00:17:56.496 }' 00:17:56.755 [2024-12-10 11:21:03.442094] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:17:56.755 [2024-12-10 11:21:03.442280] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid74363 ] 00:17:57.013 [2024-12-10 11:21:03.654364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.013 [2024-12-10 11:21:03.794663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.013 [2024-12-10 11:21:03.794788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.013 [2024-12-10 11:21:03.794815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.271 [2024-12-10 11:21:03.958286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:57.530 I/O targets: 00:17:57.530 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:57.530 00:17:57.530 00:17:57.530 CUnit - A unit testing framework for C - Version 2.1-3 00:17:57.530 http://cunit.sourceforge.net/ 00:17:57.530 00:17:57.530 00:17:57.530 Suite: bdevio tests on: Nvme1n1 00:17:57.530 Test: blockdev write read block ...passed 00:17:57.530 Test: blockdev write zeroes read block ...passed 00:17:57.530 Test: blockdev write zeroes read no split ...passed 00:17:57.530 Test: blockdev write zeroes read split ...passed 00:17:57.530 Test: blockdev write zeroes read split partial ...passed 00:17:57.530 Test: blockdev reset ...[2024-12-10 11:21:04.328486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:57.530 [2024-12-10 11:21:04.328657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:17:57.530 [2024-12-10 11:21:04.341618] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:57.530 passed 00:17:57.530 Test: blockdev write read 8 blocks ...passed 00:17:57.530 Test: blockdev write read size > 128k ...passed 00:17:57.530 Test: blockdev write read invalid size ...passed 00:17:57.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:57.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:57.530 Test: blockdev write read max offset ...passed 00:17:57.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:57.530 Test: blockdev writev readv 8 blocks ...passed 00:17:57.530 Test: blockdev writev readv 30 x 1block ...passed 00:17:57.530 Test: blockdev writev readv block ...passed 00:17:57.530 Test: blockdev writev readv size > 128k ...passed 00:17:57.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:57.530 Test: blockdev comparev and writev ...[2024-12-10 11:21:04.353159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.530 [2024-12-10 11:21:04.353344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.530 [2024-12-10 11:21:04.353487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.530 [2024-12-10 11:21:04.353604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:57.530 [2024-12-10 11:21:04.354125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.530 [2024-12-10 11:21:04.354258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:57.530 [2024-12-10 11:21:04.354366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.530 [2024-12-10 11:21:04.354475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:57.530 [2024-12-10 11:21:04.354974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.530 [2024-12-10 11:21:04.355097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:57.530 [2024-12-10 11:21:04.355197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.530 [2024-12-10 11:21:04.355300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:57.790 [2024-12-10 11:21:04.355826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.790 [2024-12-10 11:21:04.355952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:57.790 [2024-12-10 11:21:04.356059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.790 [2024-12-10 11:21:04.356157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:57.790 passed 00:17:57.790 Test: blockdev nvme passthru rw ...passed 00:17:57.790 Test: blockdev nvme passthru vendor specific ...[2024-12-10 11:21:04.357329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.790 [2024-12-10 11:21:04.357488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:57.790 [2024-12-10 11:21:04.357752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.790 [2024-12-10 11:21:04.357879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:57.790 [2024-12-10 11:21:04.358143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.790 [2024-12-10 11:21:04.358265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:57.790 [2024-12-10 11:21:04.358527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.790 [2024-12-10 11:21:04.358652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:57.790 passed 00:17:57.790 Test: blockdev nvme admin passthru ...passed 00:17:57.790 Test: blockdev copy ...passed 00:17:57.790 00:17:57.790 Run Summary: Type Total Ran Passed Failed Inactive 00:17:57.790 suites 1 1 n/a 0 0 00:17:57.790 tests 23 23 23 0 0 00:17:57.790 asserts 152 152 152 0 n/a 00:17:57.790 00:17:57.790 Elapsed time = 0.245 seconds 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.355 rmmod nvme_tcp 00:17:58.355 rmmod nvme_fabrics 00:17:58.355 rmmod nvme_keyring 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 74327 ']' 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 74327 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 74327 ']' 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 74327 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:58.355 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.612 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74327 00:17:58.612 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:58.612 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:58.612 killing process with pid 74327 00:17:58.612 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74327' 00:17:58.612 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 74327 00:17:58.612 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 74327 00:17:59.546 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:59.546 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:59.546 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:59.546 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:59.546 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:59.546 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:59.546 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:59.546 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:59.547 11:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:17:59.547 00:17:59.547 real 0m4.826s 00:17:59.547 user 0m16.453s 00:17:59.547 sys 0m1.627s 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.547 ************************************ 00:17:59.547 END TEST nvmf_bdevio_no_huge 00:17:59.547 ************************************ 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:59.547 ************************************ 00:17:59.547 START TEST nvmf_tls 00:17:59.547 ************************************ 00:17:59.547 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:59.547 * Looking for test storage... 
00:17:59.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.807 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.808 --rc genhtml_branch_coverage=1 00:17:59.808 --rc genhtml_function_coverage=1 00:17:59.808 --rc genhtml_legend=1 00:17:59.808 --rc geninfo_all_blocks=1 00:17:59.808 --rc geninfo_unexecuted_blocks=1 00:17:59.808 00:17:59.808 ' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.808 --rc genhtml_branch_coverage=1 00:17:59.808 --rc genhtml_function_coverage=1 00:17:59.808 --rc genhtml_legend=1 00:17:59.808 --rc geninfo_all_blocks=1 00:17:59.808 --rc geninfo_unexecuted_blocks=1 00:17:59.808 00:17:59.808 ' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.808 --rc genhtml_branch_coverage=1 00:17:59.808 --rc genhtml_function_coverage=1 00:17:59.808 --rc genhtml_legend=1 00:17:59.808 --rc geninfo_all_blocks=1 00:17:59.808 --rc geninfo_unexecuted_blocks=1 00:17:59.808 00:17:59.808 ' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.808 --rc genhtml_branch_coverage=1 00:17:59.808 --rc genhtml_function_coverage=1 00:17:59.808 --rc genhtml_legend=1 00:17:59.808 --rc geninfo_all_blocks=1 00:17:59.808 --rc geninfo_unexecuted_blocks=1 00:17:59.808 00:17:59.808 ' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.808 11:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:59.808 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:59.808 
11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:59.808 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:59.809 Cannot find device "nvmf_init_br" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:59.809 Cannot find device "nvmf_init_br2" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:59.809 Cannot find device "nvmf_tgt_br" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.809 Cannot find device "nvmf_tgt_br2" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:59.809 Cannot find device "nvmf_init_br" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:59.809 Cannot find device "nvmf_init_br2" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:59.809 Cannot find device "nvmf_tgt_br" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:59.809 Cannot find device "nvmf_tgt_br2" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:59.809 Cannot find device "nvmf_br" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:59.809 Cannot find device "nvmf_init_if" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:59.809 Cannot find device "nvmf_init_if2" 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:17:59.809 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:00.081 11:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:00.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:00.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:00.081 00:18:00.081 --- 10.0.0.3 ping statistics --- 00:18:00.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.081 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:00.081 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:00.081 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:18:00.081 00:18:00.081 --- 10.0.0.4 ping statistics --- 00:18:00.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.081 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:00.081 00:18:00.081 --- 10.0.0.1 ping statistics --- 00:18:00.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.081 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:00.081 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:00.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:00.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:18:00.081 00:18:00.081 --- 10.0.0.2 ping statistics --- 00:18:00.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.081 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74643 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74643 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74643 ']' 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.345 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.345 [2024-12-10 11:21:07.056701] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
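(Aside, not part of the captured log: the nvmf_veth_init sequence above builds the test topology — initiator veths nvmf_init_if/nvmf_init_if2 (10.0.0.1/10.0.0.2) on the host, target veths nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge and verified with the four pings. A condensed single-pair sketch of the same pattern, with names and addresses copied from the log and error handling omitted; run as root:

    # Minimal one-interface-per-side version of the veth/namespace topology built above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3   # host -> target namespace, mirroring the check above

With this in place the nvmf_tgt started inside the namespace can listen on 10.0.0.3:4420 and be reached from the host-side initiator, which is what the subsequent tests rely on.)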
00:18:00.345 [2024-12-10 11:21:07.056870] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.603 [2024-12-10 11:21:07.247903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.603 [2024-12-10 11:21:07.374914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.603 [2024-12-10 11:21:07.375000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.603 [2024-12-10 11:21:07.375026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.603 [2024-12-10 11:21:07.375054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.603 [2024-12-10 11:21:07.375071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.603 [2024-12-10 11:21:07.376551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.538 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.538 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.538 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.538 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.538 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.538 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.538 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:01.538 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:01.796 true 00:18:01.796 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:01.796 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:02.054 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:02.054 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:02.054 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:02.621 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:02.621 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:02.879 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:02.879 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:02.879 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:03.138 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:18:03.138 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:03.396 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:03.396 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:03.396 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.396 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:03.654 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:03.654 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:03.654 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:03.913 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.913 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:04.171 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:04.171 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:04.171 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:04.428 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:04.428 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:04.698 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:04.698 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:04.698 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.AgdskNEoHq 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.XPjE3g3Nou 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.AgdskNEoHq 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.XPjE3g3Nou 00:18:04.986 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:05.244 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:05.810 [2024-12-10 11:21:12.402630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:05.810 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.AgdskNEoHq 00:18:05.811 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AgdskNEoHq 00:18:05.811 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:06.069 [2024-12-10 11:21:12.834369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.069 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:06.636 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:06.636 [2024-12-10 11:21:13.418613] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:06.636 [2024-12-10 11:21:13.418964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:06.636 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.895 malloc0 00:18:07.154 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:07.412 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AgdskNEoHq 00:18:07.669 11:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:07.928 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.AgdskNEoHq 00:18:20.181 Initializing NVMe Controllers 00:18:20.181 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:20.181 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:20.181 Initialization complete. Launching workers. 00:18:20.181 ======================================================== 00:18:20.181 Latency(us) 00:18:20.181 Device Information : IOPS MiB/s Average min max 00:18:20.181 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6445.47 25.18 9933.87 2285.96 17197.37 00:18:20.181 ======================================================== 00:18:20.181 Total : 6445.47 25.18 9933.87 2285.96 17197.37 00:18:20.181 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AgdskNEoHq 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AgdskNEoHq 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74889 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74889 /var/tmp/bdevperf.sock 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74889 ']' 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
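(Aside, not part of the captured log: the key files handed to keyring_file_add_key and --psk-path above hold the NVMe/TLS PSK interchange format produced earlier by format_interchange_psk. A sketch of that derivation, assuming — as the helper's inline python suggests — the format is the "NVMeTLSkey-1" prefix, a two-digit hash identifier, and base64 over the configured key bytes with a CRC-32 appended; the CRC byte order is an assumption here:

    # Hypothetical re-derivation of the interchange-format PSK generated above,
    # written as the scripts do it: bash driving an inline python heredoc.
    python3 - <<'EOF'
    import base64, zlib

    key = b"00112233445566778899aabbccddeeff"               # configured key bytes from the log
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")    # byte order is an assumption
    print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
    EOF

If those assumptions hold, the printed value matches the key written to /tmp/tmp.AgdskNEoHq, registered as key0, and passed to nvmf_subsystem_add_host, bdev_nvme_attach_controller and spdk_nvme_perf via --psk/--psk-path in the surrounding entries.)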
00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.181 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.181 [2024-12-10 11:21:25.096530] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:18:20.181 [2024-12-10 11:21:25.096693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74889 ] 00:18:20.181 [2024-12-10 11:21:25.290115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.181 [2024-12-10 11:21:25.413967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.181 [2024-12-10 11:21:25.627906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:20.181 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.181 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.181 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AgdskNEoHq 00:18:20.181 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.181 [2024-12-10 11:21:26.729078] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.181 TLSTESTn1 00:18:20.181 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:20.181 Running I/O for 10 seconds... 
00:18:22.490 2771.00 IOPS, 10.82 MiB/s [2024-12-10T11:21:30.252Z] 2781.00 IOPS, 10.86 MiB/s [2024-12-10T11:21:31.185Z] 2811.67 IOPS, 10.98 MiB/s [2024-12-10T11:21:32.126Z] 2817.00 IOPS, 11.00 MiB/s [2024-12-10T11:21:33.060Z] 2810.60 IOPS, 10.98 MiB/s [2024-12-10T11:21:33.994Z] 2814.17 IOPS, 10.99 MiB/s [2024-12-10T11:21:35.381Z] 2813.14 IOPS, 10.99 MiB/s [2024-12-10T11:21:36.327Z] 2808.12 IOPS, 10.97 MiB/s [2024-12-10T11:21:37.263Z] 2809.67 IOPS, 10.98 MiB/s [2024-12-10T11:21:37.263Z] 2815.90 IOPS, 11.00 MiB/s 00:18:30.437 Latency(us) 00:18:30.437 [2024-12-10T11:21:37.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.437 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.437 Verification LBA range: start 0x0 length 0x2000 00:18:30.437 TLSTESTn1 : 10.03 2821.11 11.02 0.00 0.00 45264.69 9472.93 48854.11 00:18:30.437 [2024-12-10T11:21:37.263Z] =================================================================================================================== 00:18:30.438 [2024-12-10T11:21:37.264Z] Total : 2821.11 11.02 0.00 0.00 45264.69 9472.93 48854.11 00:18:30.438 { 00:18:30.438 "results": [ 00:18:30.438 { 00:18:30.438 "job": "TLSTESTn1", 00:18:30.438 "core_mask": "0x4", 00:18:30.438 "workload": "verify", 00:18:30.438 "status": "finished", 00:18:30.438 "verify_range": { 00:18:30.438 "start": 0, 00:18:30.438 "length": 8192 00:18:30.438 }, 00:18:30.438 "queue_depth": 128, 00:18:30.438 "io_size": 4096, 00:18:30.438 "runtime": 10.026916, 00:18:30.438 "iops": 2821.106709181567, 00:18:30.438 "mibps": 11.019948082740497, 00:18:30.438 "io_failed": 0, 00:18:30.438 "io_timeout": 0, 00:18:30.438 "avg_latency_us": 45264.68949989876, 00:18:30.438 "min_latency_us": 9472.930909090908, 00:18:30.438 "max_latency_us": 48854.10909090909 00:18:30.438 } 00:18:30.438 ], 00:18:30.438 "core_count": 1 00:18:30.438 } 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74889 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74889 ']' 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74889 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74889 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:30.438 killing process with pid 74889 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74889' 00:18:30.438 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.438 00:18:30.438 Latency(us) 00:18:30.438 [2024-12-10T11:21:37.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.438 [2024-12-10T11:21:37.264Z] =================================================================================================================== 00:18:30.438 [2024-12-10T11:21:37.264Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74889 00:18:30.438 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74889 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XPjE3g3Nou 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XPjE3g3Nou 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XPjE3g3Nou 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XPjE3g3Nou 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75037 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75037 /var/tmp/bdevperf.sock 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75037 ']' 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.374 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.374 [2024-12-10 11:21:38.192113] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:18:31.374 [2024-12-10 11:21:38.192286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75037 ] 00:18:31.633 [2024-12-10 11:21:38.384787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.891 [2024-12-10 11:21:38.515582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.149 [2024-12-10 11:21:38.727505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:32.445 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.445 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.445 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XPjE3g3Nou 00:18:32.733 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.299 [2024-12-10 11:21:39.823802] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.299 [2024-12-10 11:21:39.833503] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:33.299 [2024-12-10 11:21:39.834304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:18:33.299 [2024-12-10 11:21:39.835276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:18:33.299 [2024-12-10 11:21:39.836264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:33.299 [2024-12-10 11:21:39.836310] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:33.300 [2024-12-10 11:21:39.836329] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:33.300 [2024-12-10 11:21:39.836383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:33.300 request: 00:18:33.300 { 00:18:33.300 "name": "TLSTEST", 00:18:33.300 "trtype": "tcp", 00:18:33.300 "traddr": "10.0.0.3", 00:18:33.300 "adrfam": "ipv4", 00:18:33.300 "trsvcid": "4420", 00:18:33.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.300 "prchk_reftag": false, 00:18:33.300 "prchk_guard": false, 00:18:33.300 "hdgst": false, 00:18:33.300 "ddgst": false, 00:18:33.300 "psk": "key0", 00:18:33.300 "allow_unrecognized_csi": false, 00:18:33.300 "method": "bdev_nvme_attach_controller", 00:18:33.300 "req_id": 1 00:18:33.300 } 00:18:33.300 Got JSON-RPC error response 00:18:33.300 response: 00:18:33.300 { 00:18:33.300 "code": -5, 00:18:33.300 "message": "Input/output error" 00:18:33.300 } 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 75037 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75037 ']' 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75037 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75037 00:18:33.300 killing process with pid 75037 00:18:33.300 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.300 00:18:33.300 Latency(us) 00:18:33.300 [2024-12-10T11:21:40.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.300 [2024-12-10T11:21:40.126Z] =================================================================================================================== 00:18:33.300 [2024-12-10T11:21:40.126Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75037' 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75037 00:18:33.300 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75037 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AgdskNEoHq 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AgdskNEoHq 
00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:34.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AgdskNEoHq 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AgdskNEoHq 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75078 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75078 /var/tmp/bdevperf.sock 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75078 ']' 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.235 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.235 [2024-12-10 11:21:40.913624] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:18:34.235 [2024-12-10 11:21:40.914206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75078 ] 00:18:34.493 [2024-12-10 11:21:41.083135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.493 [2024-12-10 11:21:41.186524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.752 [2024-12-10 11:21:41.368092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.318 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.318 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:35.318 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AgdskNEoHq 00:18:35.576 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:35.835 [2024-12-10 11:21:42.501972] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.835 [2024-12-10 11:21:42.513295] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:35.835 [2024-12-10 11:21:42.513362] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:35.835 [2024-12-10 11:21:42.513439] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:35.835 [2024-12-10 11:21:42.514180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:18:35.835 [2024-12-10 11:21:42.515155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:18:35.835 [2024-12-10 11:21:42.516140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:35.835 [2024-12-10 11:21:42.516186] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:35.835 [2024-12-10 11:21:42.516205] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:35.835 [2024-12-10 11:21:42.516226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:35.835 request: 00:18:35.835 { 00:18:35.835 "name": "TLSTEST", 00:18:35.835 "trtype": "tcp", 00:18:35.835 "traddr": "10.0.0.3", 00:18:35.835 "adrfam": "ipv4", 00:18:35.835 "trsvcid": "4420", 00:18:35.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.835 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:35.835 "prchk_reftag": false, 00:18:35.835 "prchk_guard": false, 00:18:35.835 "hdgst": false, 00:18:35.835 "ddgst": false, 00:18:35.835 "psk": "key0", 00:18:35.835 "allow_unrecognized_csi": false, 00:18:35.835 "method": "bdev_nvme_attach_controller", 00:18:35.835 "req_id": 1 00:18:35.835 } 00:18:35.835 Got JSON-RPC error response 00:18:35.835 response: 00:18:35.835 { 00:18:35.835 "code": -5, 00:18:35.835 "message": "Input/output error" 00:18:35.835 } 00:18:35.835 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 75078 00:18:35.835 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75078 ']' 00:18:35.835 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75078 00:18:35.835 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:35.835 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.836 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75078 00:18:35.836 killing process with pid 75078 00:18:35.836 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.836 00:18:35.836 Latency(us) 00:18:35.836 [2024-12-10T11:21:42.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.836 [2024-12-10T11:21:42.662Z] =================================================================================================================== 00:18:35.836 [2024-12-10T11:21:42.662Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.836 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:35.836 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:35.836 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75078' 00:18:35.836 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75078 00:18:35.836 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75078 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AgdskNEoHq 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AgdskNEoHq 
00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:36.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.770 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AgdskNEoHq 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AgdskNEoHq 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75119 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75119 /var/tmp/bdevperf.sock 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75119 ']' 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.771 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.029 [2024-12-10 11:21:43.649514] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:18:37.029 [2024-12-10 11:21:43.649655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75119 ] 00:18:37.029 [2024-12-10 11:21:43.827915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.328 [2024-12-10 11:21:43.932259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.328 [2024-12-10 11:21:44.110433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.902 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.902 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.902 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AgdskNEoHq 00:18:38.161 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.421 [2024-12-10 11:21:45.218235] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.421 [2024-12-10 11:21:45.227209] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:38.421 [2024-12-10 11:21:45.227260] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:38.421 [2024-12-10 11:21:45.227325] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:38.421 [2024-12-10 11:21:45.227493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:18:38.421 [2024-12-10 11:21:45.228449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:18:38.421 [2024-12-10 11:21:45.229446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:38.421 [2024-12-10 11:21:45.229490] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:38.421 [2024-12-10 11:21:45.229514] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:38.421 [2024-12-10 11:21:45.229535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:38.421 request: 00:18:38.421 { 00:18:38.421 "name": "TLSTEST", 00:18:38.421 "trtype": "tcp", 00:18:38.421 "traddr": "10.0.0.3", 00:18:38.421 "adrfam": "ipv4", 00:18:38.421 "trsvcid": "4420", 00:18:38.421 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:38.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.421 "prchk_reftag": false, 00:18:38.421 "prchk_guard": false, 00:18:38.421 "hdgst": false, 00:18:38.421 "ddgst": false, 00:18:38.421 "psk": "key0", 00:18:38.421 "allow_unrecognized_csi": false, 00:18:38.421 "method": "bdev_nvme_attach_controller", 00:18:38.421 "req_id": 1 00:18:38.421 } 00:18:38.421 Got JSON-RPC error response 00:18:38.421 response: 00:18:38.421 { 00:18:38.421 "code": -5, 00:18:38.421 "message": "Input/output error" 00:18:38.421 } 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 75119 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75119 ']' 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75119 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75119 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75119' 00:18:38.685 killing process with pid 75119 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75119 00:18:38.685 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.685 00:18:38.685 Latency(us) 00:18:38.685 [2024-12-10T11:21:45.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.685 [2024-12-10T11:21:45.511Z] =================================================================================================================== 00:18:38.685 [2024-12-10T11:21:45.511Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:38.685 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75119 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:39.621 11:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75158 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75158 /var/tmp/bdevperf.sock 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75158 ']' 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.621 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.621 [2024-12-10 11:21:46.336802] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:18:39.621 [2024-12-10 11:21:46.336969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75158 ] 00:18:39.880 [2024-12-10 11:21:46.524816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.880 [2024-12-10 11:21:46.651736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.139 [2024-12-10 11:21:46.868244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.705 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.706 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:40.706 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:40.964 [2024-12-10 11:21:47.634826] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:40.965 [2024-12-10 11:21:47.634913] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:40.965 request: 00:18:40.965 { 00:18:40.965 "name": "key0", 00:18:40.965 "path": "", 00:18:40.965 "method": "keyring_file_add_key", 00:18:40.965 "req_id": 1 00:18:40.965 } 00:18:40.965 Got JSON-RPC error response 00:18:40.965 response: 00:18:40.965 { 00:18:40.965 "code": -1, 00:18:40.965 "message": "Operation not permitted" 00:18:40.965 } 00:18:40.965 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:41.223 [2024-12-10 11:21:47.907053] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.223 [2024-12-10 11:21:47.907422] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:41.223 request: 00:18:41.223 { 00:18:41.223 "name": "TLSTEST", 00:18:41.223 "trtype": "tcp", 00:18:41.223 "traddr": "10.0.0.3", 00:18:41.223 "adrfam": "ipv4", 00:18:41.223 "trsvcid": "4420", 00:18:41.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.223 "prchk_reftag": false, 00:18:41.223 "prchk_guard": false, 00:18:41.223 "hdgst": false, 00:18:41.223 "ddgst": false, 00:18:41.223 "psk": "key0", 00:18:41.223 "allow_unrecognized_csi": false, 00:18:41.223 "method": "bdev_nvme_attach_controller", 00:18:41.223 "req_id": 1 00:18:41.223 } 00:18:41.223 Got JSON-RPC error response 00:18:41.223 response: 00:18:41.223 { 00:18:41.223 "code": -126, 00:18:41.223 "message": "Required key not available" 00:18:41.223 } 00:18:41.223 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 75158 00:18:41.223 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75158 ']' 00:18:41.223 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75158 00:18:41.223 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.223 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.223 11:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75158 00:18:41.223 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:41.223 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:41.223 killing process with pid 75158 00:18:41.224 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.224 00:18:41.224 Latency(us) 00:18:41.224 [2024-12-10T11:21:48.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.224 [2024-12-10T11:21:48.050Z] =================================================================================================================== 00:18:41.224 [2024-12-10T11:21:48.050Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.224 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75158' 00:18:41.224 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75158 00:18:41.224 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75158 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 74643 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74643 ']' 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74643 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.200 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74643 00:18:42.201 killing process with pid 74643 00:18:42.201 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:42.201 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:42.201 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74643' 00:18:42.201 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74643 00:18:42.201 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74643 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.LB5BaTl8XS 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.LB5BaTl8XS 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75224 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75224 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75224 ']' 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.577 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.577 [2024-12-10 11:21:50.394604] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:18:43.577 [2024-12-10 11:21:50.394801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.836 [2024-12-10 11:21:50.581343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.095 [2024-12-10 11:21:50.729888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.095 [2024-12-10 11:21:50.729993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
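The key_long value logged above is the NVMe/TCP TLS PSK interchange form produced by format_interchange_psk: the NVMeTLSkey-1 prefix, a two-digit hash indicator (02 here), and a base64 blob. A small standard-library sketch of how such a string can be rebuilt from the hex key used above (the little-endian CRC byte order and the treatment of the hex string itself as the key bytes are assumptions about the test helper, not a documented API; compare the printed value against the one in the log):

import base64
import zlib

key = b"00112233445566778899aabbccddeeff0011223344556677"  # key string from the log
digest = 2  # hash indicator, "02" in the logged key

# Assumed interchange layout: NVMeTLSkey-1:<digest>:<base64(key bytes || CRC32)>:
# with the CRC32 of the key appended in little-endian byte order.
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
psk = "NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode())
print(psk)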
00:18:44.095 [2024-12-10 11:21:50.730025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.095 [2024-12-10 11:21:50.730060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.095 [2024-12-10 11:21:50.730085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.095 [2024-12-10 11:21:50.731785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.354 [2024-12-10 11:21:50.922272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.612 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.612 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.612 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.612 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.612 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.871 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.871 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.LB5BaTl8XS 00:18:44.871 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LB5BaTl8XS 00:18:44.871 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:45.129 [2024-12-10 11:21:51.729643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.129 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:45.387 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:45.646 [2024-12-10 11:21:52.277884] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:45.646 [2024-12-10 11:21:52.278398] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:45.646 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:45.927 malloc0 00:18:45.927 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:46.185 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS 00:18:46.443 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.701 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LB5BaTl8XS 00:18:46.701 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:18:46.701 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:46.701 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:46.701 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LB5BaTl8XS 00:18:46.701 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.701 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.701 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75285 00:18:46.701 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.702 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75285 /var/tmp/bdevperf.sock 00:18:46.702 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75285 ']' 00:18:46.702 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.702 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.702 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.702 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.702 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.702 [2024-12-10 11:21:53.499129] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:18:46.702 [2024-12-10 11:21:53.499503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75285 ] 00:18:46.959 [2024-12-10 11:21:53.677987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.218 [2024-12-10 11:21:53.802878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.218 [2024-12-10 11:21:53.989160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:47.785 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.785 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.785 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS 00:18:48.043 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.302 [2024-12-10 11:21:55.059208] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.560 TLSTESTn1 00:18:48.560 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:48.560 Running I/O for 10 seconds... 00:18:50.871 2785.00 IOPS, 10.88 MiB/s [2024-12-10T11:21:58.633Z] 2816.00 IOPS, 11.00 MiB/s [2024-12-10T11:21:59.568Z] 2838.00 IOPS, 11.09 MiB/s [2024-12-10T11:22:00.503Z] 2864.00 IOPS, 11.19 MiB/s [2024-12-10T11:22:01.438Z] 2878.40 IOPS, 11.24 MiB/s [2024-12-10T11:22:02.422Z] 2886.17 IOPS, 11.27 MiB/s [2024-12-10T11:22:03.358Z] 2891.71 IOPS, 11.30 MiB/s [2024-12-10T11:22:04.735Z] 2894.62 IOPS, 11.31 MiB/s [2024-12-10T11:22:05.669Z] 2896.33 IOPS, 11.31 MiB/s [2024-12-10T11:22:05.669Z] 2896.20 IOPS, 11.31 MiB/s 00:18:58.843 Latency(us) 00:18:58.843 [2024-12-10T11:22:05.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.843 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:58.843 Verification LBA range: start 0x0 length 0x2000 00:18:58.843 TLSTESTn1 : 10.02 2901.47 11.33 0.00 0.00 44025.64 9115.46 34793.66 00:18:58.843 [2024-12-10T11:22:05.669Z] =================================================================================================================== 00:18:58.843 [2024-12-10T11:22:05.669Z] Total : 2901.47 11.33 0.00 0.00 44025.64 9115.46 34793.66 00:18:58.843 { 00:18:58.843 "results": [ 00:18:58.843 { 00:18:58.843 "job": "TLSTESTn1", 00:18:58.843 "core_mask": "0x4", 00:18:58.843 "workload": "verify", 00:18:58.843 "status": "finished", 00:18:58.843 "verify_range": { 00:18:58.843 "start": 0, 00:18:58.843 "length": 8192 00:18:58.843 }, 00:18:58.844 "queue_depth": 128, 00:18:58.844 "io_size": 4096, 00:18:58.844 "runtime": 10.024926, 00:18:58.844 "iops": 2901.4678013583343, 00:18:58.844 "mibps": 11.333858599055993, 00:18:58.844 "io_failed": 0, 00:18:58.844 "io_timeout": 0, 00:18:58.844 "avg_latency_us": 44025.64125754398, 00:18:58.844 "min_latency_us": 9115.461818181819, 00:18:58.844 
"max_latency_us": 34793.65818181818 00:18:58.844 } 00:18:58.844 ], 00:18:58.844 "core_count": 1 00:18:58.844 } 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 75285 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75285 ']' 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75285 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75285 00:18:58.844 killing process with pid 75285 00:18:58.844 Received shutdown signal, test time was about 10.000000 seconds 00:18:58.844 00:18:58.844 Latency(us) 00:18:58.844 [2024-12-10T11:22:05.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.844 [2024-12-10T11:22:05.670Z] =================================================================================================================== 00:18:58.844 [2024-12-10T11:22:05.670Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75285' 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75285 00:18:58.844 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75285 00:18:59.781 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.LB5BaTl8XS 00:18:59.781 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LB5BaTl8XS 00:18:59.781 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:59.781 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LB5BaTl8XS 00:18:59.781 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:59.781 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LB5BaTl8XS 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LB5BaTl8XS 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75428 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75428 /var/tmp/bdevperf.sock 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75428 ']' 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.782 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.782 [2024-12-10 11:22:06.478985] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
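The trace above (target/tls.sh@171-172) deliberately loosens the PSK file to mode 0666 and then wraps the bdevperf attach in NOT, so this part of the test only passes if the attach fails. A minimal sketch of the sequence being exercised, assuming the target from the previous step is still listening on 10.0.0.3:4420 and bdevperf is up on /var/tmp/bdevperf.sock (rpc.py path abbreviated):

    chmod 0666 /tmp/tmp.LB5BaTl8XS
    # expected to be rejected: the keyring refuses key files accessible to group/others, as the
    # "Invalid permissions for key file ... 0100666" error below shows
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS
    # expected to fail with "Required key not available", since key0 was never added to the keyring
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0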
00:18:59.782 [2024-12-10 11:22:06.479406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75428 ] 00:19:00.043 [2024-12-10 11:22:06.688591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.043 [2024-12-10 11:22:06.795522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.301 [2024-12-10 11:22:06.984246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:00.868 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.868 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.868 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS 00:19:01.126 [2024-12-10 11:22:07.796123] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LB5BaTl8XS': 0100666 00:19:01.126 [2024-12-10 11:22:07.796192] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:01.126 request: 00:19:01.126 { 00:19:01.126 "name": "key0", 00:19:01.126 "path": "/tmp/tmp.LB5BaTl8XS", 00:19:01.126 "method": "keyring_file_add_key", 00:19:01.126 "req_id": 1 00:19:01.126 } 00:19:01.126 Got JSON-RPC error response 00:19:01.126 response: 00:19:01.126 { 00:19:01.126 "code": -1, 00:19:01.126 "message": "Operation not permitted" 00:19:01.126 } 00:19:01.126 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:01.391 [2024-12-10 11:22:08.104380] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.391 [2024-12-10 11:22:08.104494] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:01.391 request: 00:19:01.391 { 00:19:01.391 "name": "TLSTEST", 00:19:01.391 "trtype": "tcp", 00:19:01.391 "traddr": "10.0.0.3", 00:19:01.391 "adrfam": "ipv4", 00:19:01.391 "trsvcid": "4420", 00:19:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.391 "prchk_reftag": false, 00:19:01.391 "prchk_guard": false, 00:19:01.391 "hdgst": false, 00:19:01.391 "ddgst": false, 00:19:01.391 "psk": "key0", 00:19:01.391 "allow_unrecognized_csi": false, 00:19:01.391 "method": "bdev_nvme_attach_controller", 00:19:01.391 "req_id": 1 00:19:01.391 } 00:19:01.391 Got JSON-RPC error response 00:19:01.391 response: 00:19:01.391 { 00:19:01.391 "code": -126, 00:19:01.391 "message": "Required key not available" 00:19:01.391 } 00:19:01.391 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 75428 00:19:01.391 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75428 ']' 00:19:01.391 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75428 00:19:01.391 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.391 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.391 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75428 00:19:01.391 killing process with pid 75428 00:19:01.391 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.392 00:19:01.392 Latency(us) 00:19:01.392 [2024-12-10T11:22:08.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.392 [2024-12-10T11:22:08.218Z] =================================================================================================================== 00:19:01.392 [2024-12-10T11:22:08.218Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.392 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:01.392 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:01.392 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75428' 00:19:01.392 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75428 00:19:01.392 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75428 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 75224 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75224 ']' 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75224 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.336 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75224 00:19:02.594 killing process with pid 75224 00:19:02.594 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:02.594 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:02.594 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75224' 00:19:02.594 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75224 00:19:02.594 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75224 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75491 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75491 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75491 ']' 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.531 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.790 [2024-12-10 11:22:10.489253] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:03.790 [2024-12-10 11:22:10.489489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.049 [2024-12-10 11:22:10.676217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.049 [2024-12-10 11:22:10.781808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.049 [2024-12-10 11:22:10.781897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.049 [2024-12-10 11:22:10.781919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.049 [2024-12-10 11:22:10.781943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.049 [2024-12-10 11:22:10.781964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
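nvmfappstart here relaunches the target inside the nvmf_tgt_ns_spdk network namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 75491) and then blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock answers. A simplified stand-in for that wait loop, not the harness's actual implementation, could look like:

    # hypothetical stand-in for waitforlisten: poll the UNIX-domain RPC socket until the target is ready
    for _ in $(seq 1 100); do
        rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done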
00:19:04.049 [2024-12-10 11:22:10.783221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.308 [2024-12-10 11:22:10.972948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.LB5BaTl8XS 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LB5BaTl8XS 00:19:04.875 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:04.876 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:04.876 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:04.876 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:04.876 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.LB5BaTl8XS 00:19:04.876 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LB5BaTl8XS 00:19:04.876 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:05.134 [2024-12-10 11:22:11.783352] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.134 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:05.392 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:05.651 [2024-12-10 11:22:12.359562] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.651 [2024-12-10 11:22:12.359948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:05.651 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.909 malloc0 00:19:05.909 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:06.168 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS 00:19:06.441 
[2024-12-10 11:22:13.180822] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LB5BaTl8XS': 0100666 00:19:06.441 [2024-12-10 11:22:13.180907] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:06.442 request: 00:19:06.442 { 00:19:06.442 "name": "key0", 00:19:06.442 "path": "/tmp/tmp.LB5BaTl8XS", 00:19:06.442 "method": "keyring_file_add_key", 00:19:06.442 "req_id": 1 00:19:06.442 } 00:19:06.442 Got JSON-RPC error response 00:19:06.442 response: 00:19:06.442 { 00:19:06.442 "code": -1, 00:19:06.442 "message": "Operation not permitted" 00:19:06.442 } 00:19:06.442 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:06.757 [2024-12-10 11:22:13.484948] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:06.757 [2024-12-10 11:22:13.485061] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:06.757 request: 00:19:06.757 { 00:19:06.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.757 "host": "nqn.2016-06.io.spdk:host1", 00:19:06.757 "psk": "key0", 00:19:06.757 "method": "nvmf_subsystem_add_host", 00:19:06.757 "req_id": 1 00:19:06.757 } 00:19:06.757 Got JSON-RPC error response 00:19:06.757 response: 00:19:06.757 { 00:19:06.757 "code": -32603, 00:19:06.757 "message": "Internal error" 00:19:06.757 } 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 75491 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75491 ']' 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75491 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75491 00:19:06.757 killing process with pid 75491 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75491' 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75491 00:19:06.757 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75491 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.LB5BaTl8XS 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75567 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75567 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75567 ']' 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.132 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.132 [2024-12-10 11:22:14.774976] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:08.132 [2024-12-10 11:22:14.775127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.132 [2024-12-10 11:22:14.948434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.395 [2024-12-10 11:22:15.053540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.395 [2024-12-10 11:22:15.053855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.395 [2024-12-10 11:22:15.053959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.395 [2024-12-10 11:22:15.054072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.395 [2024-12-10 11:22:15.054160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
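Before this restart, target/tls.sh@182 put the key file back to mode 0600, so the setup_nvmf_tgt sequence that was just rejected is now expected to go through. The calls issued against /var/tmp/spdk.sock in the entries that follow boil down to (sketch, rpc.py path abbreviated):

    chmod 0600 /tmp/tmp.LB5BaTl8XS
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS                  # accepted now that the file is 0600
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0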
00:19:08.395 [2024-12-10 11:22:15.055442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.651 [2024-12-10 11:22:15.244609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.217 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.217 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:09.217 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.217 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.217 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.217 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.217 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.LB5BaTl8XS 00:19:09.217 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LB5BaTl8XS 00:19:09.217 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:09.475 [2024-12-10 11:22:16.109764] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.475 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:09.733 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:09.990 [2024-12-10 11:22:16.734010] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.990 [2024-12-10 11:22:16.734563] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:09.990 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:10.248 malloc0 00:19:10.248 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:10.506 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS 00:19:10.765 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=75629 00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 75629 /var/tmp/bdevperf.sock 00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75629 ']' 
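With the subsystem now exported over a TLS listener, the initiator half is driven through the second bdevperf instance (pid 75629) over its own RPC socket: the key is added to bdevperf's keyring, the controller is attached with --psk (producing the TLSTESTn1 bdev that the verify workload runs against), and save_config is then used to capture both sides' configuration, which appears below as the tgtconf and bdevperfconf dumps. Sketch of those calls (rpc.py path abbreviated):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py save_config                                # target side, captured as tgtconf
    rpc.py -s /var/tmp/bdevperf.sock save_config      # initiator side, captured as bdevperfconf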
00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.024 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.282 [2024-12-10 11:22:17.914155] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:11.282 [2024-12-10 11:22:17.914938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75629 ] 00:19:11.282 [2024-12-10 11:22:18.088180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.540 [2024-12-10 11:22:18.192394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.837 [2024-12-10 11:22:18.372273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:12.411 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.411 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:12.411 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS 00:19:12.669 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.928 [2024-12-10 11:22:19.553503] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.928 TLSTESTn1 00:19:12.928 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:13.495 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:13.495 "subsystems": [ 00:19:13.495 { 00:19:13.495 "subsystem": "keyring", 00:19:13.495 "config": [ 00:19:13.495 { 00:19:13.495 "method": "keyring_file_add_key", 00:19:13.495 "params": { 00:19:13.495 "name": "key0", 00:19:13.495 "path": "/tmp/tmp.LB5BaTl8XS" 00:19:13.495 } 00:19:13.495 } 00:19:13.495 ] 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "subsystem": "iobuf", 00:19:13.495 "config": [ 00:19:13.495 { 00:19:13.495 "method": "iobuf_set_options", 00:19:13.495 "params": { 00:19:13.495 "small_pool_count": 8192, 00:19:13.495 "large_pool_count": 1024, 00:19:13.495 "small_bufsize": 8192, 00:19:13.495 "large_bufsize": 135168, 00:19:13.495 "enable_numa": false 00:19:13.495 } 00:19:13.495 } 00:19:13.495 ] 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "subsystem": "sock", 00:19:13.495 "config": [ 00:19:13.495 { 00:19:13.495 "method": "sock_set_default_impl", 00:19:13.495 "params": { 
00:19:13.495 "impl_name": "uring" 00:19:13.495 } 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "method": "sock_impl_set_options", 00:19:13.495 "params": { 00:19:13.495 "impl_name": "ssl", 00:19:13.495 "recv_buf_size": 4096, 00:19:13.495 "send_buf_size": 4096, 00:19:13.495 "enable_recv_pipe": true, 00:19:13.495 "enable_quickack": false, 00:19:13.495 "enable_placement_id": 0, 00:19:13.495 "enable_zerocopy_send_server": true, 00:19:13.495 "enable_zerocopy_send_client": false, 00:19:13.495 "zerocopy_threshold": 0, 00:19:13.495 "tls_version": 0, 00:19:13.495 "enable_ktls": false 00:19:13.495 } 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "method": "sock_impl_set_options", 00:19:13.495 "params": { 00:19:13.495 "impl_name": "posix", 00:19:13.495 "recv_buf_size": 2097152, 00:19:13.495 "send_buf_size": 2097152, 00:19:13.495 "enable_recv_pipe": true, 00:19:13.495 "enable_quickack": false, 00:19:13.495 "enable_placement_id": 0, 00:19:13.495 "enable_zerocopy_send_server": true, 00:19:13.495 "enable_zerocopy_send_client": false, 00:19:13.495 "zerocopy_threshold": 0, 00:19:13.495 "tls_version": 0, 00:19:13.495 "enable_ktls": false 00:19:13.495 } 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "method": "sock_impl_set_options", 00:19:13.495 "params": { 00:19:13.495 "impl_name": "uring", 00:19:13.495 "recv_buf_size": 2097152, 00:19:13.495 "send_buf_size": 2097152, 00:19:13.495 "enable_recv_pipe": true, 00:19:13.495 "enable_quickack": false, 00:19:13.495 "enable_placement_id": 0, 00:19:13.495 "enable_zerocopy_send_server": false, 00:19:13.495 "enable_zerocopy_send_client": false, 00:19:13.495 "zerocopy_threshold": 0, 00:19:13.495 "tls_version": 0, 00:19:13.495 "enable_ktls": false 00:19:13.495 } 00:19:13.495 } 00:19:13.495 ] 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "subsystem": "vmd", 00:19:13.495 "config": [] 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "subsystem": "accel", 00:19:13.495 "config": [ 00:19:13.495 { 00:19:13.495 "method": "accel_set_options", 00:19:13.495 "params": { 00:19:13.495 "small_cache_size": 128, 00:19:13.495 "large_cache_size": 16, 00:19:13.495 "task_count": 2048, 00:19:13.495 "sequence_count": 2048, 00:19:13.495 "buf_count": 2048 00:19:13.495 } 00:19:13.495 } 00:19:13.495 ] 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "subsystem": "bdev", 00:19:13.495 "config": [ 00:19:13.495 { 00:19:13.495 "method": "bdev_set_options", 00:19:13.495 "params": { 00:19:13.495 "bdev_io_pool_size": 65535, 00:19:13.495 "bdev_io_cache_size": 256, 00:19:13.495 "bdev_auto_examine": true, 00:19:13.495 "iobuf_small_cache_size": 128, 00:19:13.495 "iobuf_large_cache_size": 16 00:19:13.495 } 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "method": "bdev_raid_set_options", 00:19:13.495 "params": { 00:19:13.495 "process_window_size_kb": 1024, 00:19:13.495 "process_max_bandwidth_mb_sec": 0 00:19:13.495 } 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "method": "bdev_iscsi_set_options", 00:19:13.495 "params": { 00:19:13.495 "timeout_sec": 30 00:19:13.495 } 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "method": "bdev_nvme_set_options", 00:19:13.495 "params": { 00:19:13.495 "action_on_timeout": "none", 00:19:13.495 "timeout_us": 0, 00:19:13.495 "timeout_admin_us": 0, 00:19:13.495 "keep_alive_timeout_ms": 10000, 00:19:13.495 "arbitration_burst": 0, 00:19:13.495 "low_priority_weight": 0, 00:19:13.495 "medium_priority_weight": 0, 00:19:13.495 "high_priority_weight": 0, 00:19:13.495 "nvme_adminq_poll_period_us": 10000, 00:19:13.495 "nvme_ioq_poll_period_us": 0, 00:19:13.495 "io_queue_requests": 0, 00:19:13.495 "delay_cmd_submit": 
true, 00:19:13.495 "transport_retry_count": 4, 00:19:13.495 "bdev_retry_count": 3, 00:19:13.495 "transport_ack_timeout": 0, 00:19:13.495 "ctrlr_loss_timeout_sec": 0, 00:19:13.495 "reconnect_delay_sec": 0, 00:19:13.495 "fast_io_fail_timeout_sec": 0, 00:19:13.495 "disable_auto_failback": false, 00:19:13.495 "generate_uuids": false, 00:19:13.495 "transport_tos": 0, 00:19:13.495 "nvme_error_stat": false, 00:19:13.495 "rdma_srq_size": 0, 00:19:13.495 "io_path_stat": false, 00:19:13.495 "allow_accel_sequence": false, 00:19:13.495 "rdma_max_cq_size": 0, 00:19:13.495 "rdma_cm_event_timeout_ms": 0, 00:19:13.495 "dhchap_digests": [ 00:19:13.495 "sha256", 00:19:13.495 "sha384", 00:19:13.495 "sha512" 00:19:13.495 ], 00:19:13.495 "dhchap_dhgroups": [ 00:19:13.495 "null", 00:19:13.495 "ffdhe2048", 00:19:13.495 "ffdhe3072", 00:19:13.495 "ffdhe4096", 00:19:13.495 "ffdhe6144", 00:19:13.495 "ffdhe8192" 00:19:13.495 ] 00:19:13.495 } 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "method": "bdev_nvme_set_hotplug", 00:19:13.495 "params": { 00:19:13.495 "period_us": 100000, 00:19:13.495 "enable": false 00:19:13.495 } 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "method": "bdev_malloc_create", 00:19:13.495 "params": { 00:19:13.495 "name": "malloc0", 00:19:13.495 "num_blocks": 8192, 00:19:13.495 "block_size": 4096, 00:19:13.495 "physical_block_size": 4096, 00:19:13.495 "uuid": "edbaf66d-1f83-413a-a363-333e6fd109a9", 00:19:13.495 "optimal_io_boundary": 0, 00:19:13.495 "md_size": 0, 00:19:13.495 "dif_type": 0, 00:19:13.495 "dif_is_head_of_md": false, 00:19:13.495 "dif_pi_format": 0 00:19:13.495 } 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "method": "bdev_wait_for_examine" 00:19:13.495 } 00:19:13.495 ] 00:19:13.495 }, 00:19:13.496 { 00:19:13.496 "subsystem": "nbd", 00:19:13.496 "config": [] 00:19:13.496 }, 00:19:13.496 { 00:19:13.496 "subsystem": "scheduler", 00:19:13.496 "config": [ 00:19:13.496 { 00:19:13.496 "method": "framework_set_scheduler", 00:19:13.496 "params": { 00:19:13.496 "name": "static" 00:19:13.496 } 00:19:13.496 } 00:19:13.496 ] 00:19:13.496 }, 00:19:13.496 { 00:19:13.496 "subsystem": "nvmf", 00:19:13.496 "config": [ 00:19:13.496 { 00:19:13.496 "method": "nvmf_set_config", 00:19:13.496 "params": { 00:19:13.496 "discovery_filter": "match_any", 00:19:13.496 "admin_cmd_passthru": { 00:19:13.496 "identify_ctrlr": false 00:19:13.496 }, 00:19:13.496 "dhchap_digests": [ 00:19:13.496 "sha256", 00:19:13.496 "sha384", 00:19:13.496 "sha512" 00:19:13.496 ], 00:19:13.496 "dhchap_dhgroups": [ 00:19:13.496 "null", 00:19:13.496 "ffdhe2048", 00:19:13.496 "ffdhe3072", 00:19:13.496 "ffdhe4096", 00:19:13.496 "ffdhe6144", 00:19:13.496 "ffdhe8192" 00:19:13.496 ] 00:19:13.496 } 00:19:13.496 }, 00:19:13.496 { 00:19:13.496 "method": "nvmf_set_max_subsystems", 00:19:13.496 "params": { 00:19:13.496 "max_subsystems": 1024 00:19:13.496 } 00:19:13.496 }, 00:19:13.496 { 00:19:13.496 "method": "nvmf_set_crdt", 00:19:13.496 "params": { 00:19:13.496 "crdt1": 0, 00:19:13.496 "crdt2": 0, 00:19:13.496 "crdt3": 0 00:19:13.496 } 00:19:13.496 }, 00:19:13.496 { 00:19:13.496 "method": "nvmf_create_transport", 00:19:13.496 "params": { 00:19:13.496 "trtype": "TCP", 00:19:13.496 "max_queue_depth": 128, 00:19:13.496 "max_io_qpairs_per_ctrlr": 127, 00:19:13.496 "in_capsule_data_size": 4096, 00:19:13.496 "max_io_size": 131072, 00:19:13.496 "io_unit_size": 131072, 00:19:13.496 "max_aq_depth": 128, 00:19:13.496 "num_shared_buffers": 511, 00:19:13.496 "buf_cache_size": 4294967295, 00:19:13.496 "dif_insert_or_strip": false, 00:19:13.496 "zcopy": false, 
00:19:13.496 "c2h_success": false, 00:19:13.496 "sock_priority": 0, 00:19:13.496 "abort_timeout_sec": 1, 00:19:13.496 "ack_timeout": 0, 00:19:13.496 "data_wr_pool_size": 0 00:19:13.496 } 00:19:13.496 }, 00:19:13.496 { 00:19:13.496 "method": "nvmf_create_subsystem", 00:19:13.496 "params": { 00:19:13.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.496 "allow_any_host": false, 00:19:13.496 "serial_number": "SPDK00000000000001", 00:19:13.496 "model_number": "SPDK bdev Controller", 00:19:13.496 "max_namespaces": 10, 00:19:13.496 "min_cntlid": 1, 00:19:13.496 "max_cntlid": 65519, 00:19:13.496 "ana_reporting": false 00:19:13.496 } 00:19:13.496 }, 00:19:13.496 { 00:19:13.496 "method": "nvmf_subsystem_add_host", 00:19:13.496 "params": { 00:19:13.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.496 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.496 "psk": "key0" 00:19:13.496 } 00:19:13.496 }, 00:19:13.496 { 00:19:13.496 "method": "nvmf_subsystem_add_ns", 00:19:13.496 "params": { 00:19:13.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.496 "namespace": { 00:19:13.496 "nsid": 1, 00:19:13.496 "bdev_name": "malloc0", 00:19:13.496 "nguid": "EDBAF66D1F83413AA363333E6FD109A9", 00:19:13.496 "uuid": "edbaf66d-1f83-413a-a363-333e6fd109a9", 00:19:13.496 "no_auto_visible": false 00:19:13.496 } 00:19:13.496 } 00:19:13.496 }, 00:19:13.496 { 00:19:13.496 "method": "nvmf_subsystem_add_listener", 00:19:13.496 "params": { 00:19:13.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.496 "listen_address": { 00:19:13.496 "trtype": "TCP", 00:19:13.496 "adrfam": "IPv4", 00:19:13.496 "traddr": "10.0.0.3", 00:19:13.496 "trsvcid": "4420" 00:19:13.496 }, 00:19:13.496 "secure_channel": true 00:19:13.496 } 00:19:13.496 } 00:19:13.496 ] 00:19:13.496 } 00:19:13.496 ] 00:19:13.496 }' 00:19:13.496 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:13.755 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:13.755 "subsystems": [ 00:19:13.755 { 00:19:13.755 "subsystem": "keyring", 00:19:13.755 "config": [ 00:19:13.755 { 00:19:13.755 "method": "keyring_file_add_key", 00:19:13.755 "params": { 00:19:13.755 "name": "key0", 00:19:13.755 "path": "/tmp/tmp.LB5BaTl8XS" 00:19:13.755 } 00:19:13.755 } 00:19:13.755 ] 00:19:13.755 }, 00:19:13.755 { 00:19:13.755 "subsystem": "iobuf", 00:19:13.755 "config": [ 00:19:13.755 { 00:19:13.755 "method": "iobuf_set_options", 00:19:13.755 "params": { 00:19:13.755 "small_pool_count": 8192, 00:19:13.755 "large_pool_count": 1024, 00:19:13.755 "small_bufsize": 8192, 00:19:13.755 "large_bufsize": 135168, 00:19:13.755 "enable_numa": false 00:19:13.755 } 00:19:13.755 } 00:19:13.755 ] 00:19:13.755 }, 00:19:13.755 { 00:19:13.755 "subsystem": "sock", 00:19:13.755 "config": [ 00:19:13.755 { 00:19:13.755 "method": "sock_set_default_impl", 00:19:13.755 "params": { 00:19:13.755 "impl_name": "uring" 00:19:13.755 } 00:19:13.755 }, 00:19:13.755 { 00:19:13.755 "method": "sock_impl_set_options", 00:19:13.755 "params": { 00:19:13.755 "impl_name": "ssl", 00:19:13.755 "recv_buf_size": 4096, 00:19:13.755 "send_buf_size": 4096, 00:19:13.755 "enable_recv_pipe": true, 00:19:13.755 "enable_quickack": false, 00:19:13.755 "enable_placement_id": 0, 00:19:13.755 "enable_zerocopy_send_server": true, 00:19:13.755 "enable_zerocopy_send_client": false, 00:19:13.755 "zerocopy_threshold": 0, 00:19:13.755 "tls_version": 0, 00:19:13.755 "enable_ktls": false 00:19:13.755 } 00:19:13.755 }, 
00:19:13.755 { 00:19:13.755 "method": "sock_impl_set_options", 00:19:13.755 "params": { 00:19:13.755 "impl_name": "posix", 00:19:13.755 "recv_buf_size": 2097152, 00:19:13.755 "send_buf_size": 2097152, 00:19:13.755 "enable_recv_pipe": true, 00:19:13.755 "enable_quickack": false, 00:19:13.755 "enable_placement_id": 0, 00:19:13.755 "enable_zerocopy_send_server": true, 00:19:13.755 "enable_zerocopy_send_client": false, 00:19:13.755 "zerocopy_threshold": 0, 00:19:13.755 "tls_version": 0, 00:19:13.755 "enable_ktls": false 00:19:13.755 } 00:19:13.755 }, 00:19:13.755 { 00:19:13.755 "method": "sock_impl_set_options", 00:19:13.755 "params": { 00:19:13.755 "impl_name": "uring", 00:19:13.755 "recv_buf_size": 2097152, 00:19:13.755 "send_buf_size": 2097152, 00:19:13.755 "enable_recv_pipe": true, 00:19:13.755 "enable_quickack": false, 00:19:13.755 "enable_placement_id": 0, 00:19:13.755 "enable_zerocopy_send_server": false, 00:19:13.755 "enable_zerocopy_send_client": false, 00:19:13.755 "zerocopy_threshold": 0, 00:19:13.755 "tls_version": 0, 00:19:13.755 "enable_ktls": false 00:19:13.755 } 00:19:13.755 } 00:19:13.755 ] 00:19:13.755 }, 00:19:13.755 { 00:19:13.755 "subsystem": "vmd", 00:19:13.755 "config": [] 00:19:13.755 }, 00:19:13.755 { 00:19:13.755 "subsystem": "accel", 00:19:13.755 "config": [ 00:19:13.755 { 00:19:13.755 "method": "accel_set_options", 00:19:13.755 "params": { 00:19:13.755 "small_cache_size": 128, 00:19:13.755 "large_cache_size": 16, 00:19:13.755 "task_count": 2048, 00:19:13.755 "sequence_count": 2048, 00:19:13.756 "buf_count": 2048 00:19:13.756 } 00:19:13.756 } 00:19:13.756 ] 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "subsystem": "bdev", 00:19:13.756 "config": [ 00:19:13.756 { 00:19:13.756 "method": "bdev_set_options", 00:19:13.756 "params": { 00:19:13.756 "bdev_io_pool_size": 65535, 00:19:13.756 "bdev_io_cache_size": 256, 00:19:13.756 "bdev_auto_examine": true, 00:19:13.756 "iobuf_small_cache_size": 128, 00:19:13.756 "iobuf_large_cache_size": 16 00:19:13.756 } 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "method": "bdev_raid_set_options", 00:19:13.756 "params": { 00:19:13.756 "process_window_size_kb": 1024, 00:19:13.756 "process_max_bandwidth_mb_sec": 0 00:19:13.756 } 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "method": "bdev_iscsi_set_options", 00:19:13.756 "params": { 00:19:13.756 "timeout_sec": 30 00:19:13.756 } 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "method": "bdev_nvme_set_options", 00:19:13.756 "params": { 00:19:13.756 "action_on_timeout": "none", 00:19:13.756 "timeout_us": 0, 00:19:13.756 "timeout_admin_us": 0, 00:19:13.756 "keep_alive_timeout_ms": 10000, 00:19:13.756 "arbitration_burst": 0, 00:19:13.756 "low_priority_weight": 0, 00:19:13.756 "medium_priority_weight": 0, 00:19:13.756 "high_priority_weight": 0, 00:19:13.756 "nvme_adminq_poll_period_us": 10000, 00:19:13.756 "nvme_ioq_poll_period_us": 0, 00:19:13.756 "io_queue_requests": 512, 00:19:13.756 "delay_cmd_submit": true, 00:19:13.756 "transport_retry_count": 4, 00:19:13.756 "bdev_retry_count": 3, 00:19:13.756 "transport_ack_timeout": 0, 00:19:13.756 "ctrlr_loss_timeout_sec": 0, 00:19:13.756 "reconnect_delay_sec": 0, 00:19:13.756 "fast_io_fail_timeout_sec": 0, 00:19:13.756 "disable_auto_failback": false, 00:19:13.756 "generate_uuids": false, 00:19:13.756 "transport_tos": 0, 00:19:13.756 "nvme_error_stat": false, 00:19:13.756 "rdma_srq_size": 0, 00:19:13.756 "io_path_stat": false, 00:19:13.756 "allow_accel_sequence": false, 00:19:13.756 "rdma_max_cq_size": 0, 00:19:13.756 "rdma_cm_event_timeout_ms": 0, 00:19:13.756 
"dhchap_digests": [ 00:19:13.756 "sha256", 00:19:13.756 "sha384", 00:19:13.756 "sha512" 00:19:13.756 ], 00:19:13.756 "dhchap_dhgroups": [ 00:19:13.756 "null", 00:19:13.756 "ffdhe2048", 00:19:13.756 "ffdhe3072", 00:19:13.756 "ffdhe4096", 00:19:13.756 "ffdhe6144", 00:19:13.756 "ffdhe8192" 00:19:13.756 ] 00:19:13.756 } 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "method": "bdev_nvme_attach_controller", 00:19:13.756 "params": { 00:19:13.756 "name": "TLSTEST", 00:19:13.756 "trtype": "TCP", 00:19:13.756 "adrfam": "IPv4", 00:19:13.756 "traddr": "10.0.0.3", 00:19:13.756 "trsvcid": "4420", 00:19:13.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.756 "prchk_reftag": false, 00:19:13.756 "prchk_guard": false, 00:19:13.756 "ctrlr_loss_timeout_sec": 0, 00:19:13.756 "reconnect_delay_sec": 0, 00:19:13.756 "fast_io_fail_timeout_sec": 0, 00:19:13.756 "psk": "key0", 00:19:13.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.756 "hdgst": false, 00:19:13.756 "ddgst": false, 00:19:13.756 "multipath": "multipath" 00:19:13.756 } 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "method": "bdev_nvme_set_hotplug", 00:19:13.756 "params": { 00:19:13.756 "period_us": 100000, 00:19:13.756 "enable": false 00:19:13.756 } 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "method": "bdev_wait_for_examine" 00:19:13.756 } 00:19:13.756 ] 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "subsystem": "nbd", 00:19:13.756 "config": [] 00:19:13.756 } 00:19:13.756 ] 00:19:13.756 }' 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 75629 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75629 ']' 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75629 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75629 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:13.756 killing process with pid 75629 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75629' 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75629 00:19:13.756 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.756 00:19:13.756 Latency(us) 00:19:13.756 [2024-12-10T11:22:20.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.756 [2024-12-10T11:22:20.582Z] =================================================================================================================== 00:19:13.756 [2024-12-10T11:22:20.582Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.756 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75629 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 75567 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75567 ']' 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 75567 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75567 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:14.690 killing process with pid 75567 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75567' 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75567 00:19:14.690 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75567 00:19:16.065 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:16.065 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.065 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.065 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.065 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:16.065 "subsystems": [ 00:19:16.065 { 00:19:16.065 "subsystem": "keyring", 00:19:16.065 "config": [ 00:19:16.065 { 00:19:16.065 "method": "keyring_file_add_key", 00:19:16.065 "params": { 00:19:16.065 "name": "key0", 00:19:16.065 "path": "/tmp/tmp.LB5BaTl8XS" 00:19:16.065 } 00:19:16.065 } 00:19:16.065 ] 00:19:16.065 }, 00:19:16.065 { 00:19:16.065 "subsystem": "iobuf", 00:19:16.065 "config": [ 00:19:16.065 { 00:19:16.065 "method": "iobuf_set_options", 00:19:16.065 "params": { 00:19:16.065 "small_pool_count": 8192, 00:19:16.065 "large_pool_count": 1024, 00:19:16.065 "small_bufsize": 8192, 00:19:16.065 "large_bufsize": 135168, 00:19:16.065 "enable_numa": false 00:19:16.065 } 00:19:16.065 } 00:19:16.065 ] 00:19:16.065 }, 00:19:16.065 { 00:19:16.065 "subsystem": "sock", 00:19:16.065 "config": [ 00:19:16.065 { 00:19:16.065 "method": "sock_set_default_impl", 00:19:16.065 "params": { 00:19:16.065 "impl_name": "uring" 00:19:16.065 } 00:19:16.065 }, 00:19:16.065 { 00:19:16.065 "method": "sock_impl_set_options", 00:19:16.065 "params": { 00:19:16.065 "impl_name": "ssl", 00:19:16.066 "recv_buf_size": 4096, 00:19:16.066 "send_buf_size": 4096, 00:19:16.066 "enable_recv_pipe": true, 00:19:16.066 "enable_quickack": false, 00:19:16.066 "enable_placement_id": 0, 00:19:16.066 "enable_zerocopy_send_server": true, 00:19:16.066 "enable_zerocopy_send_client": false, 00:19:16.066 "zerocopy_threshold": 0, 00:19:16.066 "tls_version": 0, 00:19:16.066 "enable_ktls": false 00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "sock_impl_set_options", 00:19:16.066 "params": { 00:19:16.066 "impl_name": "posix", 00:19:16.066 "recv_buf_size": 2097152, 00:19:16.066 "send_buf_size": 2097152, 00:19:16.066 "enable_recv_pipe": true, 00:19:16.066 "enable_quickack": false, 00:19:16.066 "enable_placement_id": 0, 00:19:16.066 "enable_zerocopy_send_server": true, 00:19:16.066 "enable_zerocopy_send_client": false, 00:19:16.066 "zerocopy_threshold": 0, 00:19:16.066 "tls_version": 0, 00:19:16.066 "enable_ktls": false 
00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "sock_impl_set_options", 00:19:16.066 "params": { 00:19:16.066 "impl_name": "uring", 00:19:16.066 "recv_buf_size": 2097152, 00:19:16.066 "send_buf_size": 2097152, 00:19:16.066 "enable_recv_pipe": true, 00:19:16.066 "enable_quickack": false, 00:19:16.066 "enable_placement_id": 0, 00:19:16.066 "enable_zerocopy_send_server": false, 00:19:16.066 "enable_zerocopy_send_client": false, 00:19:16.066 "zerocopy_threshold": 0, 00:19:16.066 "tls_version": 0, 00:19:16.066 "enable_ktls": false 00:19:16.066 } 00:19:16.066 } 00:19:16.066 ] 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "subsystem": "vmd", 00:19:16.066 "config": [] 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "subsystem": "accel", 00:19:16.066 "config": [ 00:19:16.066 { 00:19:16.066 "method": "accel_set_options", 00:19:16.066 "params": { 00:19:16.066 "small_cache_size": 128, 00:19:16.066 "large_cache_size": 16, 00:19:16.066 "task_count": 2048, 00:19:16.066 "sequence_count": 2048, 00:19:16.066 "buf_count": 2048 00:19:16.066 } 00:19:16.066 } 00:19:16.066 ] 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "subsystem": "bdev", 00:19:16.066 "config": [ 00:19:16.066 { 00:19:16.066 "method": "bdev_set_options", 00:19:16.066 "params": { 00:19:16.066 "bdev_io_pool_size": 65535, 00:19:16.066 "bdev_io_cache_size": 256, 00:19:16.066 "bdev_auto_examine": true, 00:19:16.066 "iobuf_small_cache_size": 128, 00:19:16.066 "iobuf_large_cache_size": 16 00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "bdev_raid_set_options", 00:19:16.066 "params": { 00:19:16.066 "process_window_size_kb": 1024, 00:19:16.066 "process_max_bandwidth_mb_sec": 0 00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "bdev_iscsi_set_options", 00:19:16.066 "params": { 00:19:16.066 "timeout_sec": 30 00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "bdev_nvme_set_options", 00:19:16.066 "params": { 00:19:16.066 "action_on_timeout": "none", 00:19:16.066 "timeout_us": 0, 00:19:16.066 "timeout_admin_us": 0, 00:19:16.066 "keep_alive_timeout_ms": 10000, 00:19:16.066 "arbitration_burst": 0, 00:19:16.066 "low_priority_weight": 0, 00:19:16.066 "medium_priority_weight": 0, 00:19:16.066 "high_priority_weight": 0, 00:19:16.066 "nvme_adminq_poll_period_us": 10000, 00:19:16.066 "nvme_ioq_poll_period_us": 0, 00:19:16.066 "io_queue_requests": 0, 00:19:16.066 "delay_cmd_submit": true, 00:19:16.066 "transport_retry_count": 4, 00:19:16.066 "bdev_retry_count": 3, 00:19:16.066 "transport_ack_timeout": 0, 00:19:16.066 "ctrlr_loss_timeout_sec": 0, 00:19:16.066 "reconnect_delay_sec": 0, 00:19:16.066 "fast_io_fail_timeout_sec": 0, 00:19:16.066 "disable_auto_failback": false, 00:19:16.066 "generate_uuids": false, 00:19:16.066 "transport_tos": 0, 00:19:16.066 "nvme_error_stat": false, 00:19:16.066 "rdma_srq_size": 0, 00:19:16.066 "io_path_stat": false, 00:19:16.066 "allow_accel_sequence": false, 00:19:16.066 "rdma_max_cq_size": 0, 00:19:16.066 "rdma_cm_event_timeout_ms": 0, 00:19:16.066 "dhchap_digests": [ 00:19:16.066 "sha256", 00:19:16.066 "sha384", 00:19:16.066 "sha512" 00:19:16.066 ], 00:19:16.066 "dhchap_dhgroups": [ 00:19:16.066 "null", 00:19:16.066 "ffdhe2048", 00:19:16.066 "ffdhe3072", 00:19:16.066 "ffdhe4096", 00:19:16.066 "ffdhe6144", 00:19:16.066 "ffdhe8192" 00:19:16.066 ] 00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "bdev_nvme_set_hotplug", 00:19:16.066 "params": { 00:19:16.066 "period_us": 100000, 00:19:16.066 "enable": false 00:19:16.066 } 00:19:16.066 }, 
00:19:16.066 { 00:19:16.066 "method": "bdev_malloc_create", 00:19:16.066 "params": { 00:19:16.066 "name": "malloc0", 00:19:16.066 "num_blocks": 8192, 00:19:16.066 "block_size": 4096, 00:19:16.066 "physical_block_size": 4096, 00:19:16.066 "uuid": "edbaf66d-1f83-413a-a363-333e6fd109a9", 00:19:16.066 "optimal_io_boundary": 0, 00:19:16.066 "md_size": 0, 00:19:16.066 "dif_type": 0, 00:19:16.066 "dif_is_head_of_md": false, 00:19:16.066 "dif_pi_format": 0 00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "bdev_wait_for_examine" 00:19:16.066 } 00:19:16.066 ] 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "subsystem": "nbd", 00:19:16.066 "config": [] 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "subsystem": "scheduler", 00:19:16.066 "config": [ 00:19:16.066 { 00:19:16.066 "method": "framework_set_scheduler", 00:19:16.066 "params": { 00:19:16.066 "name": "static" 00:19:16.066 } 00:19:16.066 } 00:19:16.066 ] 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "subsystem": "nvmf", 00:19:16.066 "config": [ 00:19:16.066 { 00:19:16.066 "method": "nvmf_set_config", 00:19:16.066 "params": { 00:19:16.066 "discovery_filter": "match_any", 00:19:16.066 "admin_cmd_passthru": { 00:19:16.066 "identify_ctrlr": false 00:19:16.066 }, 00:19:16.066 "dhchap_digests": [ 00:19:16.066 "sha256", 00:19:16.066 "sha384", 00:19:16.066 "sha512" 00:19:16.066 ], 00:19:16.066 "dhchap_dhgroups": [ 00:19:16.066 "null", 00:19:16.066 "ffdhe2048", 00:19:16.066 "ffdhe3072", 00:19:16.066 "ffdhe4096", 00:19:16.066 "ffdhe6144", 00:19:16.066 "ffdhe8192" 00:19:16.066 ] 00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "nvmf_set_max_subsystems", 00:19:16.066 "params": { 00:19:16.066 "max_subsystems": 1024 00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "nvmf_set_crdt", 00:19:16.066 "params": { 00:19:16.066 "crdt1": 0, 00:19:16.066 "crdt2": 0, 00:19:16.066 "crdt3": 0 00:19:16.066 } 00:19:16.066 }, 00:19:16.066 { 00:19:16.066 "method": "nvmf_create_transport", 00:19:16.067 "params": { 00:19:16.067 "trtype": "TCP", 00:19:16.067 "max_queue_depth": 128, 00:19:16.067 "max_io_qpairs_per_ctrlr": 127, 00:19:16.067 "in_capsule_data_size": 4096, 00:19:16.067 "max_io_size": 131072, 00:19:16.067 "io_unit_size": 131072, 00:19:16.067 "max_aq_depth": 128, 00:19:16.067 "num_shared_buffers": 511, 00:19:16.067 "buf_cache_size": 4294967295, 00:19:16.067 "dif_insert_or_strip": false, 00:19:16.067 "zcopy": false, 00:19:16.067 "c2h_success": false, 00:19:16.067 "sock_priority": 0, 00:19:16.067 "abort_timeout_sec": 1, 00:19:16.067 "ack_timeout": 0, 00:19:16.067 "data_wr_pool_size": 0 00:19:16.067 } 00:19:16.067 }, 00:19:16.067 { 00:19:16.067 "method": "nvmf_create_subsystem", 00:19:16.067 "params": { 00:19:16.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.067 "allow_any_host": false, 00:19:16.067 "serial_number": "SPDK00000000000001", 00:19:16.067 "model_number": "SPDK bdev Controller", 00:19:16.067 "max_namespaces": 10, 00:19:16.067 "min_cntlid": 1, 00:19:16.067 "max_cntlid": 65519, 00:19:16.067 "ana_reporting": false 00:19:16.067 } 00:19:16.067 }, 00:19:16.067 { 00:19:16.067 "method": "nvmf_subsystem_add_host", 00:19:16.067 "params": { 00:19:16.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.067 "host": "nqn.2016-06.io.spdk:host1", 00:19:16.067 "psk": "key0" 00:19:16.067 } 00:19:16.067 }, 00:19:16.067 { 00:19:16.067 "method": "nvmf_subsystem_add_ns", 00:19:16.067 "params": { 00:19:16.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.067 "namespace": { 00:19:16.067 "nsid": 1, 00:19:16.067 "bdev_name": "malloc0", 
00:19:16.067 "nguid": "EDBAF66D1F83413AA363333E6FD109A9", 00:19:16.067 "uuid": "edbaf66d-1f83-413a-a363-333e6fd109a9", 00:19:16.067 "no_auto_visible": false 00:19:16.067 } 00:19:16.067 } 00:19:16.067 }, 00:19:16.067 { 00:19:16.067 "method": "nvmf_subsystem_add_listener", 00:19:16.067 "params": { 00:19:16.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.067 "listen_address": { 00:19:16.067 "trtype": "TCP", 00:19:16.067 "adrfam": "IPv4", 00:19:16.067 "traddr": "10.0.0.3", 00:19:16.067 "trsvcid": "4420" 00:19:16.067 }, 00:19:16.067 "secure_channel": true 00:19:16.067 } 00:19:16.067 } 00:19:16.067 ] 00:19:16.067 } 00:19:16.067 ] 00:19:16.067 }' 00:19:16.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.067 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75703 00:19:16.067 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:16.067 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75703 00:19:16.067 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75703 ']' 00:19:16.067 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.067 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.067 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.067 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.067 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.067 [2024-12-10 11:22:22.632928] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:16.067 [2024-12-10 11:22:22.633119] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.067 [2024-12-10 11:22:22.813891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.326 [2024-12-10 11:22:22.917438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.326 [2024-12-10 11:22:22.917507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.326 [2024-12-10 11:22:22.917527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.326 [2024-12-10 11:22:22.917552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.326 [2024-12-10 11:22:22.917569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:16.326 [2024-12-10 11:22:22.918842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.584 [2024-12-10 11:22:23.217810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:16.584 [2024-12-10 11:22:23.392622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.856 [2024-12-10 11:22:23.424539] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:16.856 [2024-12-10 11:22:23.424838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:16.857 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.857 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:16.857 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.857 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.857 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.140 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.140 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=75735 00:19:17.140 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 75735 /var/tmp/bdevperf.sock 00:19:17.140 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75735 ']' 00:19:17.140 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.140 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:17.140 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:17.140 "subsystems": [ 00:19:17.140 { 00:19:17.140 "subsystem": "keyring", 00:19:17.140 "config": [ 00:19:17.140 { 00:19:17.140 "method": "keyring_file_add_key", 00:19:17.140 "params": { 00:19:17.140 "name": "key0", 00:19:17.140 "path": "/tmp/tmp.LB5BaTl8XS" 00:19:17.140 } 00:19:17.140 } 00:19:17.140 ] 00:19:17.140 }, 00:19:17.140 { 00:19:17.140 "subsystem": "iobuf", 00:19:17.140 "config": [ 00:19:17.140 { 00:19:17.140 "method": "iobuf_set_options", 00:19:17.140 "params": { 00:19:17.140 "small_pool_count": 8192, 00:19:17.140 "large_pool_count": 1024, 00:19:17.140 "small_bufsize": 8192, 00:19:17.140 "large_bufsize": 135168, 00:19:17.140 "enable_numa": false 00:19:17.140 } 00:19:17.140 } 00:19:17.140 ] 00:19:17.140 }, 00:19:17.140 { 00:19:17.140 "subsystem": "sock", 00:19:17.140 "config": [ 00:19:17.140 { 00:19:17.140 "method": "sock_set_default_impl", 00:19:17.140 "params": { 00:19:17.140 "impl_name": "uring" 00:19:17.140 } 00:19:17.140 }, 00:19:17.140 { 00:19:17.140 "method": "sock_impl_set_options", 00:19:17.140 "params": { 00:19:17.140 "impl_name": "ssl", 00:19:17.140 "recv_buf_size": 4096, 00:19:17.140 "send_buf_size": 4096, 00:19:17.140 "enable_recv_pipe": true, 00:19:17.140 "enable_quickack": false, 00:19:17.140 "enable_placement_id": 0, 00:19:17.140 "enable_zerocopy_send_server": true, 00:19:17.140 
"enable_zerocopy_send_client": false, 00:19:17.140 "zerocopy_threshold": 0, 00:19:17.140 "tls_version": 0, 00:19:17.140 "enable_ktls": false 00:19:17.140 } 00:19:17.140 }, 00:19:17.140 { 00:19:17.140 "method": "sock_impl_set_options", 00:19:17.140 "params": { 00:19:17.140 "impl_name": "posix", 00:19:17.140 "recv_buf_size": 2097152, 00:19:17.140 "send_buf_size": 2097152, 00:19:17.140 "enable_recv_pipe": true, 00:19:17.140 "enable_quickack": false, 00:19:17.140 "enable_placement_id": 0, 00:19:17.140 "enable_zerocopy_send_server": true, 00:19:17.140 "enable_zerocopy_send_client": false, 00:19:17.140 "zerocopy_threshold": 0, 00:19:17.140 "tls_version": 0, 00:19:17.141 "enable_ktls": false 00:19:17.141 } 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "method": "sock_impl_set_options", 00:19:17.141 "params": { 00:19:17.141 "impl_name": "uring", 00:19:17.141 "recv_buf_size": 2097152, 00:19:17.141 "send_buf_size": 2097152, 00:19:17.141 "enable_recv_pipe": true, 00:19:17.141 "enable_quickack": false, 00:19:17.141 "enable_placement_id": 0, 00:19:17.141 "enable_zerocopy_send_server": false, 00:19:17.141 "enable_zerocopy_send_client": false, 00:19:17.141 "zerocopy_threshold": 0, 00:19:17.141 "tls_version": 0, 00:19:17.141 "enable_ktls": false 00:19:17.141 } 00:19:17.141 } 00:19:17.141 ] 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "subsystem": "vmd", 00:19:17.141 "config": [] 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "subsystem": "accel", 00:19:17.141 "config": [ 00:19:17.141 { 00:19:17.141 "method": "accel_set_options", 00:19:17.141 "params": { 00:19:17.141 "small_cache_size": 128, 00:19:17.141 "large_cache_size": 16, 00:19:17.141 "task_count": 2048, 00:19:17.141 "sequence_count": 2048, 00:19:17.141 "buf_count": 2048 00:19:17.141 } 00:19:17.141 } 00:19:17.141 ] 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "subsystem": "bdev", 00:19:17.141 "config": [ 00:19:17.141 { 00:19:17.141 "method": "bdev_set_options", 00:19:17.141 "params": { 00:19:17.141 "bdev_io_pool_size": 65535, 00:19:17.141 "bdev_io_cache_size": 256, 00:19:17.141 "bdev_auto_examine": true, 00:19:17.141 "iobuf_small_cache_size": 128, 00:19:17.141 "iobuf_large_cache_size": 16 00:19:17.141 } 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "method": "bdev_raid_set_options", 00:19:17.141 "params": { 00:19:17.141 "process_window_size_kb": 1024, 00:19:17.141 "process_max_bandwidth_mb_sec": 0 00:19:17.141 } 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "method": "bdev_iscsi_set_options", 00:19:17.141 "params": { 00:19:17.141 "timeout_sec": 30 00:19:17.141 } 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "method": "bdev_nvme_set_options", 00:19:17.141 "params": { 00:19:17.141 "action_on_timeout": "none", 00:19:17.141 "timeout_us": 0, 00:19:17.141 "timeout_admin_us": 0, 00:19:17.141 "keep_alive_timeout_ms": 10000, 00:19:17.141 "arbitration_burst": 0, 00:19:17.141 "low_priority_weight": 0, 00:19:17.141 "medium_priority_weight": 0, 00:19:17.141 "high_priority_weight": 0, 00:19:17.141 "nvme_adminq_poll_period_us": 10000, 00:19:17.141 "nvme_ioq_poll_period_us": 0, 00:19:17.141 "io_queue_requests": 512, 00:19:17.141 "delay_cmd_submit": true, 00:19:17.141 "transport_retry_count": 4, 00:19:17.141 "bdev_retry_count": 3, 00:19:17.141 "transport_ack_timeout": 0, 00:19:17.141 "ctrlr_loss_timeout_sec": 0, 00:19:17.141 "reconnect_delay_sec": 0, 00:19:17.141 "fast_io_fail_timeout_sec": 0, 00:19:17.141 "disable_auto_failback": false, 00:19:17.141 "generate_uuids": false, 00:19:17.141 "transport_tos": 0, 00:19:17.141 "nvme_error_stat": false, 00:19:17.141 "rdma_srq_size": 0, 
00:19:17.141 "io_path_stat": false, 00:19:17.141 "allow_accel_sequence": false, 00:19:17.141 "rdma_max_cq_size": 0, 00:19:17.141 "rdma_cm_event_timeout_ms": 0, 00:19:17.141 "dhchap_digests": [ 00:19:17.141 "sha256", 00:19:17.141 "sha384", 00:19:17.141 "sha512" 00:19:17.141 ], 00:19:17.141 "dhchap_dhgroups": [ 00:19:17.141 "null", 00:19:17.141 "ffdhe2048", 00:19:17.141 "ffdhe3072", 00:19:17.141 "ffdhe4096", 00:19:17.141 "ffdhe6144", 00:19:17.141 "ffdhe8192" 00:19:17.141 ] 00:19:17.141 } 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "method": "bdev_nvme_attach_controller", 00:19:17.141 "params": { 00:19:17.141 "name": "TLSTEST", 00:19:17.141 "trtype": "TCP", 00:19:17.141 "adrfam": "IPv4", 00:19:17.141 "traddr": "10.0.0.3", 00:19:17.141 "trsvcid": "4420", 00:19:17.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.141 "prchk_reftag": false, 00:19:17.141 "prchk_guard": false, 00:19:17.141 "ctrlr_loss_timeout_sec": 0, 00:19:17.141 "reconnect_delay_sec": 0, 00:19:17.141 "fast_io_fail_timeout_sec": 0, 00:19:17.141 "psk": "key0", 00:19:17.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.141 "hdgst": false, 00:19:17.141 "ddgst": false, 00:19:17.141 "multipath": "multipath" 00:19:17.141 } 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "method": "bdev_nvme_set_hotplug", 00:19:17.141 "params": { 00:19:17.141 "period_us": 100000, 00:19:17.141 "enable": false 00:19:17.141 } 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "method": "bdev_wait_for_examine" 00:19:17.141 } 00:19:17.141 ] 00:19:17.141 }, 00:19:17.141 { 00:19:17.141 "subsystem": "nbd", 00:19:17.141 "config": [] 00:19:17.141 } 00:19:17.141 ] 00:19:17.141 }' 00:19:17.141 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.141 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.141 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.141 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.141 [2024-12-10 11:22:23.799814] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:17.141 [2024-12-10 11:22:23.800504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75735 ] 00:19:17.400 [2024-12-10 11:22:23.981637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.400 [2024-12-10 11:22:24.106341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.659 [2024-12-10 11:22:24.370706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:17.918 [2024-12-10 11:22:24.490188] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.176 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.176 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.177 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:18.177 Running I/O for 10 seconds... 
00:19:20.491 2842.00 IOPS, 11.10 MiB/s [2024-12-10T11:22:27.885Z] 2862.50 IOPS, 11.18 MiB/s [2024-12-10T11:22:29.261Z] 2864.00 IOPS, 11.19 MiB/s [2024-12-10T11:22:29.892Z] 2852.50 IOPS, 11.14 MiB/s [2024-12-10T11:22:31.267Z] 2840.80 IOPS, 11.10 MiB/s [2024-12-10T11:22:32.202Z] 2855.17 IOPS, 11.15 MiB/s [2024-12-10T11:22:33.137Z] 2863.29 IOPS, 11.18 MiB/s [2024-12-10T11:22:34.072Z] 2868.00 IOPS, 11.20 MiB/s [2024-12-10T11:22:35.009Z] 2872.89 IOPS, 11.22 MiB/s [2024-12-10T11:22:35.009Z] 2877.00 IOPS, 11.24 MiB/s 00:19:28.183 Latency(us) 00:19:28.183 [2024-12-10T11:22:35.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.183 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:28.183 Verification LBA range: start 0x0 length 0x2000 00:19:28.183 TLSTESTn1 : 10.02 2883.07 11.26 0.00 0.00 44306.91 8400.52 48139.17 00:19:28.183 [2024-12-10T11:22:35.009Z] =================================================================================================================== 00:19:28.183 [2024-12-10T11:22:35.009Z] Total : 2883.07 11.26 0.00 0.00 44306.91 8400.52 48139.17 00:19:28.183 { 00:19:28.183 "results": [ 00:19:28.183 { 00:19:28.183 "job": "TLSTESTn1", 00:19:28.183 "core_mask": "0x4", 00:19:28.183 "workload": "verify", 00:19:28.183 "status": "finished", 00:19:28.183 "verify_range": { 00:19:28.183 "start": 0, 00:19:28.183 "length": 8192 00:19:28.183 }, 00:19:28.183 "queue_depth": 128, 00:19:28.183 "io_size": 4096, 00:19:28.183 "runtime": 10.02301, 00:19:28.183 "iops": 2883.066064984471, 00:19:28.183 "mibps": 11.26197681634559, 00:19:28.183 "io_failed": 0, 00:19:28.183 "io_timeout": 0, 00:19:28.183 "avg_latency_us": 44306.908533191556, 00:19:28.183 "min_latency_us": 8400.523636363636, 00:19:28.183 "max_latency_us": 48139.170909090906 00:19:28.183 } 00:19:28.183 ], 00:19:28.183 "core_count": 1 00:19:28.183 } 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 75735 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75735 ']' 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75735 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75735 00:19:28.183 killing process with pid 75735 00:19:28.183 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.183 00:19:28.183 Latency(us) 00:19:28.183 [2024-12-10T11:22:35.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.183 [2024-12-10T11:22:35.009Z] =================================================================================================================== 00:19:28.183 [2024-12-10T11:22:35.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 75735' 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75735 00:19:28.183 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75735 00:19:29.155 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 75703 00:19:29.155 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75703 ']' 00:19:29.155 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75703 00:19:29.155 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.413 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.413 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75703 00:19:29.413 killing process with pid 75703 00:19:29.413 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:29.413 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:29.413 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75703' 00:19:29.413 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75703 00:19:29.413 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75703 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75893 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75893 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75893 ']' 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.349 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.606 [2024-12-10 11:22:37.260963] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:19:30.606 [2024-12-10 11:22:37.261130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.865 [2024-12-10 11:22:37.440259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.865 [2024-12-10 11:22:37.542418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.865 [2024-12-10 11:22:37.542490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.865 [2024-12-10 11:22:37.542510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.865 [2024-12-10 11:22:37.542534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.865 [2024-12-10 11:22:37.542549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.865 [2024-12-10 11:22:37.543890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.123 [2024-12-10 11:22:37.728886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:31.382 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.382 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.382 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.382 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.382 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.641 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.641 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.LB5BaTl8XS 00:19:31.641 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LB5BaTl8XS 00:19:31.641 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:31.898 [2024-12-10 11:22:38.513750] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.898 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:32.156 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:32.415 [2024-12-10 11:22:39.021948] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.415 [2024-12-10 11:22:39.022258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:32.415 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:32.673 malloc0 00:19:32.673 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:19:32.931 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS 00:19:33.189 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=75953 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 75953 /var/tmp/bdevperf.sock 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75953 ']' 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.447 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.706 [2024-12-10 11:22:40.281205] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:19:33.706 [2024-12-10 11:22:40.281377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75953 ] 00:19:33.706 [2024-12-10 11:22:40.460653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.965 [2024-12-10 11:22:40.591462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.223 [2024-12-10 11:22:40.790312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:34.482 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.482 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:34.482 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS 00:19:34.797 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:35.055 [2024-12-10 11:22:41.698514] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.055 nvme0n1 00:19:35.055 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:35.314 Running I/O for 1 seconds... 00:19:36.250 2817.00 IOPS, 11.00 MiB/s 00:19:36.250 Latency(us) 00:19:36.250 [2024-12-10T11:22:43.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.250 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:36.250 Verification LBA range: start 0x0 length 0x2000 00:19:36.250 nvme0n1 : 1.02 2876.87 11.24 0.00 0.00 43863.01 297.89 27644.28 00:19:36.250 [2024-12-10T11:22:43.076Z] =================================================================================================================== 00:19:36.250 [2024-12-10T11:22:43.076Z] Total : 2876.87 11.24 0.00 0.00 43863.01 297.89 27644.28 00:19:36.250 { 00:19:36.250 "results": [ 00:19:36.250 { 00:19:36.250 "job": "nvme0n1", 00:19:36.250 "core_mask": "0x2", 00:19:36.250 "workload": "verify", 00:19:36.250 "status": "finished", 00:19:36.250 "verify_range": { 00:19:36.250 "start": 0, 00:19:36.250 "length": 8192 00:19:36.251 }, 00:19:36.251 "queue_depth": 128, 00:19:36.251 "io_size": 4096, 00:19:36.251 "runtime": 1.023681, 00:19:36.251 "iops": 2876.872775796366, 00:19:36.251 "mibps": 11.237784280454555, 00:19:36.251 "io_failed": 0, 00:19:36.251 "io_timeout": 0, 00:19:36.251 "avg_latency_us": 43863.00538725112, 00:19:36.251 "min_latency_us": 297.8909090909091, 00:19:36.251 "max_latency_us": 27644.276363636363 00:19:36.251 } 00:19:36.251 ], 00:19:36.251 "core_count": 1 00:19:36.251 } 00:19:36.251 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 75953 00:19:36.251 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75953 ']' 00:19:36.251 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75953 00:19:36.251 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:36.251 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.251 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75953 00:19:36.251 killing process with pid 75953 00:19:36.251 Received shutdown signal, test time was about 1.000000 seconds 00:19:36.251 00:19:36.251 Latency(us) 00:19:36.251 [2024-12-10T11:22:43.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.251 [2024-12-10T11:22:43.077Z] =================================================================================================================== 00:19:36.251 [2024-12-10T11:22:43.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.251 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:36.251 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:36.251 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75953' 00:19:36.251 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75953 00:19:36.251 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75953 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 75893 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75893 ']' 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75893 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75893 00:19:37.187 killing process with pid 75893 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75893' 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75893 00:19:37.187 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75893 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76023 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76023 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76023 ']' 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.562 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.562 [2024-12-10 11:22:45.195521] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:38.562 [2024-12-10 11:22:45.195689] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.562 [2024-12-10 11:22:45.369966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.820 [2024-12-10 11:22:45.473681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.820 [2024-12-10 11:22:45.473768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.820 [2024-12-10 11:22:45.473806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.820 [2024-12-10 11:22:45.473830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.820 [2024-12-10 11:22:45.473847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:38.820 [2024-12-10 11:22:45.475086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.079 [2024-12-10 11:22:45.658272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.646 [2024-12-10 11:22:46.244076] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.646 malloc0 00:19:39.646 [2024-12-10 11:22:46.296447] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.646 [2024-12-10 11:22:46.296759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:39.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=76056 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 76056 /var/tmp/bdevperf.sock 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76056 ']' 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.646 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.646 [2024-12-10 11:22:46.434225] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:19:39.646 [2024-12-10 11:22:46.434411] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76056 ] 00:19:39.904 [2024-12-10 11:22:46.618700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.904 [2024-12-10 11:22:46.722243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.163 [2024-12-10 11:22:46.902910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:40.729 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.729 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.729 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LB5BaTl8XS 00:19:40.989 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:41.247 [2024-12-10 11:22:47.915876] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.247 nvme0n1 00:19:41.247 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:41.505 Running I/O for 1 seconds... 00:19:42.440 2816.00 IOPS, 11.00 MiB/s 00:19:42.440 Latency(us) 00:19:42.440 [2024-12-10T11:22:49.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.440 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:42.440 Verification LBA range: start 0x0 length 0x2000 00:19:42.440 nvme0n1 : 1.04 2840.84 11.10 0.00 0.00 44435.36 13166.78 31218.97 00:19:42.440 [2024-12-10T11:22:49.266Z] =================================================================================================================== 00:19:42.440 [2024-12-10T11:22:49.266Z] Total : 2840.84 11.10 0.00 0.00 44435.36 13166.78 31218.97 00:19:42.440 { 00:19:42.440 "results": [ 00:19:42.440 { 00:19:42.440 "job": "nvme0n1", 00:19:42.440 "core_mask": "0x2", 00:19:42.440 "workload": "verify", 00:19:42.440 "status": "finished", 00:19:42.440 "verify_range": { 00:19:42.440 "start": 0, 00:19:42.440 "length": 8192 00:19:42.440 }, 00:19:42.440 "queue_depth": 128, 00:19:42.440 "io_size": 4096, 00:19:42.440 "runtime": 1.036313, 00:19:42.440 "iops": 2840.840556858787, 00:19:42.440 "mibps": 11.097033425229636, 00:19:42.440 "io_failed": 0, 00:19:42.440 "io_timeout": 0, 00:19:42.440 "avg_latency_us": 44435.356837944666, 00:19:42.440 "min_latency_us": 13166.778181818181, 00:19:42.440 "max_latency_us": 31218.967272727274 00:19:42.440 } 00:19:42.440 ], 00:19:42.440 "core_count": 1 00:19:42.440 } 00:19:42.440 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:42.440 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.440 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.700 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.700 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:42.700 "subsystems": [ 00:19:42.700 { 00:19:42.700 "subsystem": "keyring", 00:19:42.700 "config": [ 00:19:42.700 { 00:19:42.700 "method": "keyring_file_add_key", 00:19:42.700 "params": { 00:19:42.700 "name": "key0", 00:19:42.700 "path": "/tmp/tmp.LB5BaTl8XS" 00:19:42.700 } 00:19:42.700 } 00:19:42.700 ] 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "subsystem": "iobuf", 00:19:42.700 "config": [ 00:19:42.700 { 00:19:42.700 "method": "iobuf_set_options", 00:19:42.700 "params": { 00:19:42.700 "small_pool_count": 8192, 00:19:42.700 "large_pool_count": 1024, 00:19:42.700 "small_bufsize": 8192, 00:19:42.700 "large_bufsize": 135168, 00:19:42.700 "enable_numa": false 00:19:42.700 } 00:19:42.700 } 00:19:42.700 ] 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "subsystem": "sock", 00:19:42.700 "config": [ 00:19:42.700 { 00:19:42.700 "method": "sock_set_default_impl", 00:19:42.700 "params": { 00:19:42.700 "impl_name": "uring" 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "sock_impl_set_options", 00:19:42.700 "params": { 00:19:42.700 "impl_name": "ssl", 00:19:42.700 "recv_buf_size": 4096, 00:19:42.700 "send_buf_size": 4096, 00:19:42.700 "enable_recv_pipe": true, 00:19:42.700 "enable_quickack": false, 00:19:42.700 "enable_placement_id": 0, 00:19:42.700 "enable_zerocopy_send_server": true, 00:19:42.700 "enable_zerocopy_send_client": false, 00:19:42.700 "zerocopy_threshold": 0, 00:19:42.700 "tls_version": 0, 00:19:42.700 "enable_ktls": false 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "sock_impl_set_options", 00:19:42.700 "params": { 00:19:42.700 "impl_name": "posix", 00:19:42.700 "recv_buf_size": 2097152, 00:19:42.700 "send_buf_size": 2097152, 00:19:42.700 "enable_recv_pipe": true, 00:19:42.700 "enable_quickack": false, 00:19:42.700 "enable_placement_id": 0, 00:19:42.700 "enable_zerocopy_send_server": true, 00:19:42.700 "enable_zerocopy_send_client": false, 00:19:42.700 "zerocopy_threshold": 0, 00:19:42.700 "tls_version": 0, 00:19:42.700 "enable_ktls": false 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "sock_impl_set_options", 00:19:42.700 "params": { 00:19:42.700 "impl_name": "uring", 00:19:42.700 "recv_buf_size": 2097152, 00:19:42.700 "send_buf_size": 2097152, 00:19:42.700 "enable_recv_pipe": true, 00:19:42.700 "enable_quickack": false, 00:19:42.700 "enable_placement_id": 0, 00:19:42.700 "enable_zerocopy_send_server": false, 00:19:42.700 "enable_zerocopy_send_client": false, 00:19:42.700 "zerocopy_threshold": 0, 00:19:42.700 "tls_version": 0, 00:19:42.700 "enable_ktls": false 00:19:42.700 } 00:19:42.700 } 00:19:42.700 ] 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "subsystem": "vmd", 00:19:42.700 "config": [] 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "subsystem": "accel", 00:19:42.700 "config": [ 00:19:42.700 { 00:19:42.700 "method": "accel_set_options", 00:19:42.700 "params": { 00:19:42.700 "small_cache_size": 128, 00:19:42.700 "large_cache_size": 16, 00:19:42.700 "task_count": 2048, 00:19:42.700 "sequence_count": 2048, 00:19:42.700 "buf_count": 2048 00:19:42.700 } 00:19:42.700 } 00:19:42.700 ] 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "subsystem": "bdev", 00:19:42.700 "config": [ 00:19:42.700 { 00:19:42.700 "method": "bdev_set_options", 00:19:42.700 "params": { 00:19:42.700 "bdev_io_pool_size": 65535, 00:19:42.700 "bdev_io_cache_size": 256, 00:19:42.700 "bdev_auto_examine": true, 
00:19:42.700 "iobuf_small_cache_size": 128, 00:19:42.700 "iobuf_large_cache_size": 16 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "bdev_raid_set_options", 00:19:42.700 "params": { 00:19:42.700 "process_window_size_kb": 1024, 00:19:42.700 "process_max_bandwidth_mb_sec": 0 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "bdev_iscsi_set_options", 00:19:42.700 "params": { 00:19:42.700 "timeout_sec": 30 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "bdev_nvme_set_options", 00:19:42.700 "params": { 00:19:42.700 "action_on_timeout": "none", 00:19:42.700 "timeout_us": 0, 00:19:42.700 "timeout_admin_us": 0, 00:19:42.700 "keep_alive_timeout_ms": 10000, 00:19:42.700 "arbitration_burst": 0, 00:19:42.700 "low_priority_weight": 0, 00:19:42.700 "medium_priority_weight": 0, 00:19:42.700 "high_priority_weight": 0, 00:19:42.700 "nvme_adminq_poll_period_us": 10000, 00:19:42.700 "nvme_ioq_poll_period_us": 0, 00:19:42.700 "io_queue_requests": 0, 00:19:42.700 "delay_cmd_submit": true, 00:19:42.700 "transport_retry_count": 4, 00:19:42.700 "bdev_retry_count": 3, 00:19:42.700 "transport_ack_timeout": 0, 00:19:42.700 "ctrlr_loss_timeout_sec": 0, 00:19:42.700 "reconnect_delay_sec": 0, 00:19:42.700 "fast_io_fail_timeout_sec": 0, 00:19:42.700 "disable_auto_failback": false, 00:19:42.700 "generate_uuids": false, 00:19:42.700 "transport_tos": 0, 00:19:42.700 "nvme_error_stat": false, 00:19:42.700 "rdma_srq_size": 0, 00:19:42.700 "io_path_stat": false, 00:19:42.700 "allow_accel_sequence": false, 00:19:42.700 "rdma_max_cq_size": 0, 00:19:42.700 "rdma_cm_event_timeout_ms": 0, 00:19:42.700 "dhchap_digests": [ 00:19:42.700 "sha256", 00:19:42.700 "sha384", 00:19:42.700 "sha512" 00:19:42.700 ], 00:19:42.700 "dhchap_dhgroups": [ 00:19:42.700 "null", 00:19:42.700 "ffdhe2048", 00:19:42.700 "ffdhe3072", 00:19:42.700 "ffdhe4096", 00:19:42.700 "ffdhe6144", 00:19:42.700 "ffdhe8192" 00:19:42.700 ] 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "bdev_nvme_set_hotplug", 00:19:42.700 "params": { 00:19:42.700 "period_us": 100000, 00:19:42.700 "enable": false 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "bdev_malloc_create", 00:19:42.700 "params": { 00:19:42.700 "name": "malloc0", 00:19:42.700 "num_blocks": 8192, 00:19:42.700 "block_size": 4096, 00:19:42.700 "physical_block_size": 4096, 00:19:42.700 "uuid": "3b607e07-0633-4404-9348-1b34435794fa", 00:19:42.700 "optimal_io_boundary": 0, 00:19:42.700 "md_size": 0, 00:19:42.700 "dif_type": 0, 00:19:42.700 "dif_is_head_of_md": false, 00:19:42.700 "dif_pi_format": 0 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "bdev_wait_for_examine" 00:19:42.700 } 00:19:42.700 ] 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "subsystem": "nbd", 00:19:42.700 "config": [] 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "subsystem": "scheduler", 00:19:42.700 "config": [ 00:19:42.700 { 00:19:42.700 "method": "framework_set_scheduler", 00:19:42.700 "params": { 00:19:42.700 "name": "static" 00:19:42.700 } 00:19:42.700 } 00:19:42.700 ] 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "subsystem": "nvmf", 00:19:42.700 "config": [ 00:19:42.700 { 00:19:42.700 "method": "nvmf_set_config", 00:19:42.700 "params": { 00:19:42.700 "discovery_filter": "match_any", 00:19:42.700 "admin_cmd_passthru": { 00:19:42.700 "identify_ctrlr": false 00:19:42.700 }, 00:19:42.700 "dhchap_digests": [ 00:19:42.700 "sha256", 00:19:42.700 "sha384", 00:19:42.700 "sha512" 00:19:42.700 ], 00:19:42.700 "dhchap_dhgroups": [ 
00:19:42.700 "null", 00:19:42.700 "ffdhe2048", 00:19:42.700 "ffdhe3072", 00:19:42.700 "ffdhe4096", 00:19:42.700 "ffdhe6144", 00:19:42.700 "ffdhe8192" 00:19:42.700 ] 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "nvmf_set_max_subsystems", 00:19:42.700 "params": { 00:19:42.700 "max_subsystems": 1024 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "nvmf_set_crdt", 00:19:42.700 "params": { 00:19:42.700 "crdt1": 0, 00:19:42.700 "crdt2": 0, 00:19:42.700 "crdt3": 0 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "nvmf_create_transport", 00:19:42.700 "params": { 00:19:42.700 "trtype": "TCP", 00:19:42.700 "max_queue_depth": 128, 00:19:42.700 "max_io_qpairs_per_ctrlr": 127, 00:19:42.700 "in_capsule_data_size": 4096, 00:19:42.700 "max_io_size": 131072, 00:19:42.700 "io_unit_size": 131072, 00:19:42.700 "max_aq_depth": 128, 00:19:42.700 "num_shared_buffers": 511, 00:19:42.700 "buf_cache_size": 4294967295, 00:19:42.700 "dif_insert_or_strip": false, 00:19:42.700 "zcopy": false, 00:19:42.700 "c2h_success": false, 00:19:42.700 "sock_priority": 0, 00:19:42.700 "abort_timeout_sec": 1, 00:19:42.700 "ack_timeout": 0, 00:19:42.700 "data_wr_pool_size": 0 00:19:42.700 } 00:19:42.700 }, 00:19:42.700 { 00:19:42.700 "method": "nvmf_create_subsystem", 00:19:42.700 "params": { 00:19:42.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.701 "allow_any_host": false, 00:19:42.701 "serial_number": "00000000000000000000", 00:19:42.701 "model_number": "SPDK bdev Controller", 00:19:42.701 "max_namespaces": 32, 00:19:42.701 "min_cntlid": 1, 00:19:42.701 "max_cntlid": 65519, 00:19:42.701 "ana_reporting": false 00:19:42.701 } 00:19:42.701 }, 00:19:42.701 { 00:19:42.701 "method": "nvmf_subsystem_add_host", 00:19:42.701 "params": { 00:19:42.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.701 "host": "nqn.2016-06.io.spdk:host1", 00:19:42.701 "psk": "key0" 00:19:42.701 } 00:19:42.701 }, 00:19:42.701 { 00:19:42.701 "method": "nvmf_subsystem_add_ns", 00:19:42.701 "params": { 00:19:42.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.701 "namespace": { 00:19:42.701 "nsid": 1, 00:19:42.701 "bdev_name": "malloc0", 00:19:42.701 "nguid": "3B607E070633440493481B34435794FA", 00:19:42.701 "uuid": "3b607e07-0633-4404-9348-1b34435794fa", 00:19:42.701 "no_auto_visible": false 00:19:42.701 } 00:19:42.701 } 00:19:42.701 }, 00:19:42.701 { 00:19:42.701 "method": "nvmf_subsystem_add_listener", 00:19:42.701 "params": { 00:19:42.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.701 "listen_address": { 00:19:42.701 "trtype": "TCP", 00:19:42.701 "adrfam": "IPv4", 00:19:42.701 "traddr": "10.0.0.3", 00:19:42.701 "trsvcid": "4420" 00:19:42.701 }, 00:19:42.701 "secure_channel": false, 00:19:42.701 "sock_impl": "ssl" 00:19:42.701 } 00:19:42.701 } 00:19:42.701 ] 00:19:42.701 } 00:19:42.701 ] 00:19:42.701 }' 00:19:42.701 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:42.960 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:42.960 "subsystems": [ 00:19:42.960 { 00:19:42.960 "subsystem": "keyring", 00:19:42.960 "config": [ 00:19:42.960 { 00:19:42.960 "method": "keyring_file_add_key", 00:19:42.960 "params": { 00:19:42.960 "name": "key0", 00:19:42.960 "path": "/tmp/tmp.LB5BaTl8XS" 00:19:42.960 } 00:19:42.960 } 00:19:42.960 ] 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "subsystem": "iobuf", 00:19:42.960 "config": [ 00:19:42.960 { 00:19:42.960 "method": 
"iobuf_set_options", 00:19:42.960 "params": { 00:19:42.960 "small_pool_count": 8192, 00:19:42.960 "large_pool_count": 1024, 00:19:42.960 "small_bufsize": 8192, 00:19:42.960 "large_bufsize": 135168, 00:19:42.960 "enable_numa": false 00:19:42.960 } 00:19:42.960 } 00:19:42.960 ] 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "subsystem": "sock", 00:19:42.960 "config": [ 00:19:42.960 { 00:19:42.960 "method": "sock_set_default_impl", 00:19:42.960 "params": { 00:19:42.960 "impl_name": "uring" 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "sock_impl_set_options", 00:19:42.960 "params": { 00:19:42.960 "impl_name": "ssl", 00:19:42.960 "recv_buf_size": 4096, 00:19:42.960 "send_buf_size": 4096, 00:19:42.960 "enable_recv_pipe": true, 00:19:42.960 "enable_quickack": false, 00:19:42.960 "enable_placement_id": 0, 00:19:42.960 "enable_zerocopy_send_server": true, 00:19:42.960 "enable_zerocopy_send_client": false, 00:19:42.960 "zerocopy_threshold": 0, 00:19:42.960 "tls_version": 0, 00:19:42.960 "enable_ktls": false 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "sock_impl_set_options", 00:19:42.960 "params": { 00:19:42.960 "impl_name": "posix", 00:19:42.960 "recv_buf_size": 2097152, 00:19:42.960 "send_buf_size": 2097152, 00:19:42.960 "enable_recv_pipe": true, 00:19:42.960 "enable_quickack": false, 00:19:42.960 "enable_placement_id": 0, 00:19:42.960 "enable_zerocopy_send_server": true, 00:19:42.960 "enable_zerocopy_send_client": false, 00:19:42.960 "zerocopy_threshold": 0, 00:19:42.960 "tls_version": 0, 00:19:42.960 "enable_ktls": false 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "sock_impl_set_options", 00:19:42.960 "params": { 00:19:42.960 "impl_name": "uring", 00:19:42.960 "recv_buf_size": 2097152, 00:19:42.960 "send_buf_size": 2097152, 00:19:42.960 "enable_recv_pipe": true, 00:19:42.960 "enable_quickack": false, 00:19:42.960 "enable_placement_id": 0, 00:19:42.960 "enable_zerocopy_send_server": false, 00:19:42.960 "enable_zerocopy_send_client": false, 00:19:42.960 "zerocopy_threshold": 0, 00:19:42.960 "tls_version": 0, 00:19:42.960 "enable_ktls": false 00:19:42.960 } 00:19:42.960 } 00:19:42.960 ] 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "subsystem": "vmd", 00:19:42.960 "config": [] 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "subsystem": "accel", 00:19:42.960 "config": [ 00:19:42.960 { 00:19:42.960 "method": "accel_set_options", 00:19:42.960 "params": { 00:19:42.960 "small_cache_size": 128, 00:19:42.960 "large_cache_size": 16, 00:19:42.960 "task_count": 2048, 00:19:42.960 "sequence_count": 2048, 00:19:42.960 "buf_count": 2048 00:19:42.960 } 00:19:42.960 } 00:19:42.960 ] 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "subsystem": "bdev", 00:19:42.960 "config": [ 00:19:42.960 { 00:19:42.960 "method": "bdev_set_options", 00:19:42.960 "params": { 00:19:42.960 "bdev_io_pool_size": 65535, 00:19:42.960 "bdev_io_cache_size": 256, 00:19:42.960 "bdev_auto_examine": true, 00:19:42.960 "iobuf_small_cache_size": 128, 00:19:42.960 "iobuf_large_cache_size": 16 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "bdev_raid_set_options", 00:19:42.960 "params": { 00:19:42.960 "process_window_size_kb": 1024, 00:19:42.960 "process_max_bandwidth_mb_sec": 0 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "bdev_iscsi_set_options", 00:19:42.960 "params": { 00:19:42.960 "timeout_sec": 30 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "bdev_nvme_set_options", 00:19:42.960 "params": { 00:19:42.960 
"action_on_timeout": "none", 00:19:42.960 "timeout_us": 0, 00:19:42.960 "timeout_admin_us": 0, 00:19:42.960 "keep_alive_timeout_ms": 10000, 00:19:42.960 "arbitration_burst": 0, 00:19:42.960 "low_priority_weight": 0, 00:19:42.960 "medium_priority_weight": 0, 00:19:42.960 "high_priority_weight": 0, 00:19:42.960 "nvme_adminq_poll_period_us": 10000, 00:19:42.960 "nvme_ioq_poll_period_us": 0, 00:19:42.960 "io_queue_requests": 512, 00:19:42.960 "delay_cmd_submit": true, 00:19:42.960 "transport_retry_count": 4, 00:19:42.960 "bdev_retry_count": 3, 00:19:42.960 "transport_ack_timeout": 0, 00:19:42.960 "ctrlr_loss_timeout_sec": 0, 00:19:42.960 "reconnect_delay_sec": 0, 00:19:42.960 "fast_io_fail_timeout_sec": 0, 00:19:42.960 "disable_auto_failback": false, 00:19:42.960 "generate_uuids": false, 00:19:42.960 "transport_tos": 0, 00:19:42.960 "nvme_error_stat": false, 00:19:42.960 "rdma_srq_size": 0, 00:19:42.960 "io_path_stat": false, 00:19:42.960 "allow_accel_sequence": false, 00:19:42.960 "rdma_max_cq_size": 0, 00:19:42.960 "rdma_cm_event_timeout_ms": 0, 00:19:42.960 "dhchap_digests": [ 00:19:42.960 "sha256", 00:19:42.960 "sha384", 00:19:42.960 "sha512" 00:19:42.960 ], 00:19:42.960 "dhchap_dhgroups": [ 00:19:42.960 "null", 00:19:42.960 "ffdhe2048", 00:19:42.960 "ffdhe3072", 00:19:42.960 "ffdhe4096", 00:19:42.960 "ffdhe6144", 00:19:42.960 "ffdhe8192" 00:19:42.960 ] 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "bdev_nvme_attach_controller", 00:19:42.960 "params": { 00:19:42.960 "name": "nvme0", 00:19:42.960 "trtype": "TCP", 00:19:42.960 "adrfam": "IPv4", 00:19:42.960 "traddr": "10.0.0.3", 00:19:42.960 "trsvcid": "4420", 00:19:42.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.960 "prchk_reftag": false, 00:19:42.960 "prchk_guard": false, 00:19:42.960 "ctrlr_loss_timeout_sec": 0, 00:19:42.960 "reconnect_delay_sec": 0, 00:19:42.960 "fast_io_fail_timeout_sec": 0, 00:19:42.960 "psk": "key0", 00:19:42.960 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.960 "hdgst": false, 00:19:42.960 "ddgst": false, 00:19:42.960 "multipath": "multipath" 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "bdev_nvme_set_hotplug", 00:19:42.960 "params": { 00:19:42.960 "period_us": 100000, 00:19:42.960 "enable": false 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "bdev_enable_histogram", 00:19:42.960 "params": { 00:19:42.960 "name": "nvme0n1", 00:19:42.960 "enable": true 00:19:42.960 } 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "method": "bdev_wait_for_examine" 00:19:42.960 } 00:19:42.960 ] 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "subsystem": "nbd", 00:19:42.960 "config": [] 00:19:42.960 } 00:19:42.960 ] 00:19:42.960 }' 00:19:42.960 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 76056 00:19:42.960 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76056 ']' 00:19:42.960 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76056 00:19:42.960 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:42.960 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.960 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76056 00:19:42.960 killing process with pid 76056 00:19:42.960 Received shutdown signal, test time was about 1.000000 seconds 00:19:42.960 00:19:42.960 Latency(us) 00:19:42.960 
[2024-12-10T11:22:49.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.960 [2024-12-10T11:22:49.786Z] =================================================================================================================== 00:19:42.960 [2024-12-10T11:22:49.787Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.961 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.961 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.961 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76056' 00:19:42.961 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76056 00:19:42.961 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76056 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 76023 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76023 ']' 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76023 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76023 00:19:44.337 killing process with pid 76023 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76023' 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76023 00:19:44.337 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76023 00:19:45.274 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:45.274 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.274 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.274 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.274 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:45.274 "subsystems": [ 00:19:45.274 { 00:19:45.274 "subsystem": "keyring", 00:19:45.274 "config": [ 00:19:45.274 { 00:19:45.274 "method": "keyring_file_add_key", 00:19:45.274 "params": { 00:19:45.274 "name": "key0", 00:19:45.274 "path": "/tmp/tmp.LB5BaTl8XS" 00:19:45.274 } 00:19:45.274 } 00:19:45.274 ] 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "subsystem": "iobuf", 00:19:45.274 "config": [ 00:19:45.274 { 00:19:45.274 "method": "iobuf_set_options", 00:19:45.274 "params": { 00:19:45.274 "small_pool_count": 8192, 00:19:45.274 "large_pool_count": 1024, 00:19:45.274 "small_bufsize": 8192, 00:19:45.274 "large_bufsize": 135168, 00:19:45.274 "enable_numa": false 00:19:45.274 } 00:19:45.274 } 00:19:45.274 ] 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "subsystem": "sock", 00:19:45.274 
"config": [ 00:19:45.274 { 00:19:45.274 "method": "sock_set_default_impl", 00:19:45.274 "params": { 00:19:45.274 "impl_name": "uring" 00:19:45.274 } 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "method": "sock_impl_set_options", 00:19:45.274 "params": { 00:19:45.274 "impl_name": "ssl", 00:19:45.274 "recv_buf_size": 4096, 00:19:45.274 "send_buf_size": 4096, 00:19:45.274 "enable_recv_pipe": true, 00:19:45.274 "enable_quickack": false, 00:19:45.274 "enable_placement_id": 0, 00:19:45.274 "enable_zerocopy_send_server": true, 00:19:45.274 "enable_zerocopy_send_client": false, 00:19:45.274 "zerocopy_threshold": 0, 00:19:45.274 "tls_version": 0, 00:19:45.274 "enable_ktls": false 00:19:45.274 } 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "method": "sock_impl_set_options", 00:19:45.274 "params": { 00:19:45.274 "impl_name": "posix", 00:19:45.274 "recv_buf_size": 2097152, 00:19:45.274 "send_buf_size": 2097152, 00:19:45.274 "enable_recv_pipe": true, 00:19:45.274 "enable_quickack": false, 00:19:45.274 "enable_placement_id": 0, 00:19:45.274 "enable_zerocopy_send_server": true, 00:19:45.274 "enable_zerocopy_send_client": false, 00:19:45.274 "zerocopy_threshold": 0, 00:19:45.274 "tls_version": 0, 00:19:45.274 "enable_ktls": false 00:19:45.274 } 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "method": "sock_impl_set_options", 00:19:45.274 "params": { 00:19:45.274 "impl_name": "uring", 00:19:45.274 "recv_buf_size": 2097152, 00:19:45.274 "send_buf_size": 2097152, 00:19:45.274 "enable_recv_pipe": true, 00:19:45.274 "enable_quickack": false, 00:19:45.274 "enable_placement_id": 0, 00:19:45.274 "enable_zerocopy_send_server": false, 00:19:45.274 "enable_zerocopy_send_client": false, 00:19:45.274 "zerocopy_threshold": 0, 00:19:45.274 "tls_version": 0, 00:19:45.274 "enable_ktls": false 00:19:45.274 } 00:19:45.274 } 00:19:45.274 ] 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "subsystem": "vmd", 00:19:45.274 "config": [] 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "subsystem": "accel", 00:19:45.274 "config": [ 00:19:45.274 { 00:19:45.274 "method": "accel_set_options", 00:19:45.274 "params": { 00:19:45.274 "small_cache_size": 128, 00:19:45.274 "large_cache_size": 16, 00:19:45.274 "task_count": 2048, 00:19:45.274 "sequence_count": 2048, 00:19:45.274 "buf_count": 2048 00:19:45.274 } 00:19:45.274 } 00:19:45.274 ] 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "subsystem": "bdev", 00:19:45.274 "config": [ 00:19:45.274 { 00:19:45.274 "method": "bdev_set_options", 00:19:45.274 "params": { 00:19:45.274 "bdev_io_pool_size": 65535, 00:19:45.274 "bdev_io_cache_size": 256, 00:19:45.274 "bdev_auto_examine": true, 00:19:45.274 "iobuf_small_cache_size": 128, 00:19:45.274 "iobuf_large_cache_size": 16 00:19:45.274 } 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "method": "bdev_raid_set_options", 00:19:45.274 "params": { 00:19:45.274 "process_window_size_kb": 1024, 00:19:45.274 "process_max_bandwidth_mb_sec": 0 00:19:45.274 } 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "method": "bdev_iscsi_set_options", 00:19:45.274 "params": { 00:19:45.274 "timeout_sec": 30 00:19:45.274 } 00:19:45.274 }, 00:19:45.274 { 00:19:45.274 "method": "bdev_nvme_set_options", 00:19:45.274 "params": { 00:19:45.274 "action_on_timeout": "none", 00:19:45.274 "timeout_us": 0, 00:19:45.274 "timeout_admin_us": 0, 00:19:45.274 "keep_alive_timeout_ms": 10000, 00:19:45.274 "arbitration_burst": 0, 00:19:45.274 "low_priority_weight": 0, 00:19:45.274 "medium_priority_weight": 0, 00:19:45.274 "high_priority_weight": 0, 00:19:45.274 "nvme_adminq_poll_period_us": 10000, 00:19:45.274 
"nvme_ioq_poll_period_us": 0, 00:19:45.274 "io_queue_requests": 0, 00:19:45.274 "delay_cmd_submit": true, 00:19:45.274 "transport_retry_count": 4, 00:19:45.274 "bdev_retry_count": 3, 00:19:45.274 "transport_ack_timeout": 0, 00:19:45.274 "ctrlr_loss_timeout_sec": 0, 00:19:45.274 "reconnect_delay_sec": 0, 00:19:45.274 "fast_io_fail_timeout_sec": 0, 00:19:45.274 "disable_auto_failback": false, 00:19:45.274 "generate_uuids": false, 00:19:45.274 "transport_tos": 0, 00:19:45.275 "nvme_error_stat": false, 00:19:45.275 "rdma_srq_size": 0, 00:19:45.275 "io_path_stat": false, 00:19:45.275 "allow_accel_sequence": false, 00:19:45.275 "rdma_max_cq_size": 0, 00:19:45.275 "rdma_cm_event_timeout_ms": 0, 00:19:45.275 "dhchap_digests": [ 00:19:45.275 "sha256", 00:19:45.275 "sha384", 00:19:45.275 "sha512" 00:19:45.275 ], 00:19:45.275 "dhchap_dhgroups": [ 00:19:45.275 "null", 00:19:45.275 "ffdhe2048", 00:19:45.275 "ffdhe3072", 00:19:45.275 "ffdhe4096", 00:19:45.275 "ffdhe6144", 00:19:45.275 "ffdhe8192" 00:19:45.275 ] 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "bdev_nvme_set_hotplug", 00:19:45.275 "params": { 00:19:45.275 "period_us": 100000, 00:19:45.275 "enable": false 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "bdev_malloc_create", 00:19:45.275 "params": { 00:19:45.275 "name": "malloc0", 00:19:45.275 "num_blocks": 8192, 00:19:45.275 "block_size": 4096, 00:19:45.275 "physical_block_size": 4096, 00:19:45.275 "uuid": "3b607e07-0633-4404-9348-1b34435794fa", 00:19:45.275 "optimal_io_boundary": 0, 00:19:45.275 "md_size": 0, 00:19:45.275 "dif_type": 0, 00:19:45.275 "dif_is_head_of_md": false, 00:19:45.275 "dif_pi_format": 0 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "bdev_wait_for_examine" 00:19:45.275 } 00:19:45.275 ] 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "subsystem": "nbd", 00:19:45.275 "config": [] 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "subsystem": "scheduler", 00:19:45.275 "config": [ 00:19:45.275 { 00:19:45.275 "method": "framework_set_scheduler", 00:19:45.275 "params": { 00:19:45.275 "name": "static" 00:19:45.275 } 00:19:45.275 } 00:19:45.275 ] 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "subsystem": "nvmf", 00:19:45.275 "config": [ 00:19:45.275 { 00:19:45.275 "method": "nvmf_set_config", 00:19:45.275 "params": { 00:19:45.275 "discovery_filter": "match_any", 00:19:45.275 "admin_cmd_passthru": { 00:19:45.275 "identify_ctrlr": false 00:19:45.275 }, 00:19:45.275 "dhchap_digests": [ 00:19:45.275 "sha256", 00:19:45.275 "sha384", 00:19:45.275 "sha512" 00:19:45.275 ], 00:19:45.275 "dhchap_dhgroups": [ 00:19:45.275 "null", 00:19:45.275 "ffdhe2048", 00:19:45.275 "ffdhe3072", 00:19:45.275 "ffdhe4096", 00:19:45.275 "ffdhe6144", 00:19:45.275 "ffdhe8192" 00:19:45.275 ] 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "nvmf_set_max_subsystems", 00:19:45.275 "params": { 00:19:45.275 "max_subsystems": 1024 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "nvmf_set_crdt", 00:19:45.275 "params": { 00:19:45.275 "crdt1": 0, 00:19:45.275 "crdt2": 0, 00:19:45.275 "crdt3": 0 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "nvmf_create_transport", 00:19:45.275 "params": { 00:19:45.275 "trtype": "TCP", 00:19:45.275 "max_queue_depth": 128, 00:19:45.275 "max_io_qpairs_per_ctrlr": 127, 00:19:45.275 "in_capsule_data_size": 4096, 00:19:45.275 "max_io_size": 131072, 00:19:45.275 "io_unit_size": 131072, 00:19:45.275 "max_aq_depth": 128, 00:19:45.275 "num_shared_buffers": 511, 00:19:45.275 
"buf_cache_size": 4294967295, 00:19:45.275 "dif_insert_or_strip": false, 00:19:45.275 "zcopy": false, 00:19:45.275 "c2h_success": false, 00:19:45.275 "sock_priority": 0, 00:19:45.275 "abort_timeout_sec": 1, 00:19:45.275 "ack_timeout": 0, 00:19:45.275 "data_wr_pool_size": 0 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "nvmf_create_subsystem", 00:19:45.275 "params": { 00:19:45.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.275 "allow_any_host": false, 00:19:45.275 "serial_number": "00000000000000000000", 00:19:45.275 "model_number": "SPDK bdev Controller", 00:19:45.275 "max_namespaces": 32, 00:19:45.275 "min_cntlid": 1, 00:19:45.275 "max_cntlid": 65519, 00:19:45.275 "ana_reporting": false 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "nvmf_subsystem_add_host", 00:19:45.275 "params": { 00:19:45.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.275 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.275 "psk": "key0" 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "nvmf_subsystem_add_ns", 00:19:45.275 "params": { 00:19:45.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.275 "namespace": { 00:19:45.275 "nsid": 1, 00:19:45.275 "bdev_name": "malloc0", 00:19:45.275 "nguid": "3B607E070633440493481B34435794FA", 00:19:45.275 "uuid": "3b607e07-0633-4404-9348-1b34435794fa", 00:19:45.275 "no_auto_visible": false 00:19:45.275 } 00:19:45.275 } 00:19:45.275 }, 00:19:45.275 { 00:19:45.275 "method": "nvmf_subsystem_add_listener", 00:19:45.275 "params": { 00:19:45.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.275 "listen_address": { 00:19:45.275 "trtype": "TCP", 00:19:45.275 "adrfam": "IPv4", 00:19:45.275 "traddr": "10.0.0.3", 00:19:45.275 "trsvcid": "4420" 00:19:45.275 }, 00:19:45.275 "secure_channel": false, 00:19:45.275 "sock_impl": "ssl" 00:19:45.275 } 00:19:45.275 } 00:19:45.275 ] 00:19:45.275 } 00:19:45.275 ] 00:19:45.275 }' 00:19:45.275 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76136 00:19:45.275 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:45.275 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76136 00:19:45.275 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76136 ']' 00:19:45.275 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.275 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.275 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.275 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.275 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.275 [2024-12-10 11:22:52.030611] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:19:45.275 [2024-12-10 11:22:52.030780] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.534 [2024-12-10 11:22:52.215434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.534 [2024-12-10 11:22:52.340101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.534 [2024-12-10 11:22:52.340188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.534 [2024-12-10 11:22:52.340224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.534 [2024-12-10 11:22:52.340253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.534 [2024-12-10 11:22:52.340270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.534 [2024-12-10 11:22:52.341806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.101 [2024-12-10 11:22:52.647507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:46.101 [2024-12-10 11:22:52.823204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.101 [2024-12-10 11:22:52.855159] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.101 [2024-12-10 11:22:52.855474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=76168 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 76168 /var/tmp/bdevperf.sock 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76168 ']' 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:46.359 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:46.359 "subsystems": [ 00:19:46.359 { 00:19:46.359 "subsystem": "keyring", 00:19:46.359 "config": [ 00:19:46.359 { 00:19:46.359 "method": "keyring_file_add_key", 00:19:46.359 "params": { 00:19:46.359 "name": "key0", 00:19:46.359 "path": "/tmp/tmp.LB5BaTl8XS" 00:19:46.359 } 00:19:46.359 } 00:19:46.359 ] 00:19:46.359 }, 00:19:46.359 { 00:19:46.360 "subsystem": "iobuf", 00:19:46.360 "config": [ 00:19:46.360 { 00:19:46.360 "method": "iobuf_set_options", 00:19:46.360 "params": { 00:19:46.360 "small_pool_count": 8192, 00:19:46.360 "large_pool_count": 1024, 00:19:46.360 "small_bufsize": 8192, 00:19:46.360 "large_bufsize": 135168, 00:19:46.360 "enable_numa": false 00:19:46.360 } 00:19:46.360 } 00:19:46.360 ] 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "subsystem": "sock", 00:19:46.360 "config": [ 00:19:46.360 { 00:19:46.360 "method": "sock_set_default_impl", 00:19:46.360 "params": { 00:19:46.360 "impl_name": "uring" 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "sock_impl_set_options", 00:19:46.360 "params": { 00:19:46.360 "impl_name": "ssl", 00:19:46.360 "recv_buf_size": 4096, 00:19:46.360 "send_buf_size": 4096, 00:19:46.360 "enable_recv_pipe": true, 00:19:46.360 "enable_quickack": false, 00:19:46.360 "enable_placement_id": 0, 00:19:46.360 "enable_zerocopy_send_server": true, 00:19:46.360 "enable_zerocopy_send_client": false, 00:19:46.360 "zerocopy_threshold": 0, 00:19:46.360 "tls_version": 0, 00:19:46.360 "enable_ktls": false 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "sock_impl_set_options", 00:19:46.360 "params": { 00:19:46.360 "impl_name": "posix", 00:19:46.360 "recv_buf_size": 2097152, 00:19:46.360 "send_buf_size": 2097152, 00:19:46.360 "enable_recv_pipe": true, 00:19:46.360 "enable_quickack": false, 00:19:46.360 "enable_placement_id": 0, 00:19:46.360 "enable_zerocopy_send_server": true, 00:19:46.360 "enable_zerocopy_send_client": false, 00:19:46.360 "zerocopy_threshold": 0, 00:19:46.360 "tls_version": 0, 00:19:46.360 "enable_ktls": false 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "sock_impl_set_options", 00:19:46.360 "params": { 00:19:46.360 "impl_name": "uring", 00:19:46.360 "recv_buf_size": 2097152, 00:19:46.360 "send_buf_size": 2097152, 00:19:46.360 "enable_recv_pipe": true, 00:19:46.360 "enable_quickack": false, 00:19:46.360 "enable_placement_id": 0, 00:19:46.360 "enable_zerocopy_send_server": false, 00:19:46.360 "enable_zerocopy_send_client": false, 00:19:46.360 "zerocopy_threshold": 0, 00:19:46.360 "tls_version": 0, 00:19:46.360 "enable_ktls": false 00:19:46.360 } 00:19:46.360 } 00:19:46.360 ] 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "subsystem": "vmd", 00:19:46.360 "config": [] 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "subsystem": "accel", 00:19:46.360 "config": [ 00:19:46.360 { 00:19:46.360 "method": "accel_set_options", 00:19:46.360 "params": { 00:19:46.360 "small_cache_size": 128, 00:19:46.360 "large_cache_size": 16, 00:19:46.360 "task_count": 2048, 00:19:46.360 "sequence_count": 2048, 
00:19:46.360 "buf_count": 2048 00:19:46.360 } 00:19:46.360 } 00:19:46.360 ] 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "subsystem": "bdev", 00:19:46.360 "config": [ 00:19:46.360 { 00:19:46.360 "method": "bdev_set_options", 00:19:46.360 "params": { 00:19:46.360 "bdev_io_pool_size": 65535, 00:19:46.360 "bdev_io_cache_size": 256, 00:19:46.360 "bdev_auto_examine": true, 00:19:46.360 "iobuf_small_cache_size": 128, 00:19:46.360 "iobuf_large_cache_size": 16 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "bdev_raid_set_options", 00:19:46.360 "params": { 00:19:46.360 "process_window_size_kb": 1024, 00:19:46.360 "process_max_bandwidth_mb_sec": 0 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "bdev_iscsi_set_options", 00:19:46.360 "params": { 00:19:46.360 "timeout_sec": 30 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "bdev_nvme_set_options", 00:19:46.360 "params": { 00:19:46.360 "action_on_timeout": "none", 00:19:46.360 "timeout_us": 0, 00:19:46.360 "timeout_admin_us": 0, 00:19:46.360 "keep_alive_timeout_ms": 10000, 00:19:46.360 "arbitration_burst": 0, 00:19:46.360 "low_priority_weight": 0, 00:19:46.360 "medium_priority_weight": 0, 00:19:46.360 "high_priority_weight": 0, 00:19:46.360 "nvme_adminq_poll_period_us": 10000, 00:19:46.360 "nvme_ioq_poll_period_us": 0, 00:19:46.360 "io_queue_requests": 512, 00:19:46.360 "delay_cmd_submit": true, 00:19:46.360 "transport_retry_count": 4, 00:19:46.360 "bdev_retry_count": 3, 00:19:46.360 "transport_ack_timeout": 0, 00:19:46.360 "ctrlr_loss_timeout_sec": 0, 00:19:46.360 "reconnect_delay_sec": 0, 00:19:46.360 "fast_io_fail_timeout_sec": 0, 00:19:46.360 "disable_auto_failback": false, 00:19:46.360 "generate_uuids": false, 00:19:46.360 "transport_tos": 0, 00:19:46.360 "nvme_error_stat": false, 00:19:46.360 "rdma_srq_size": 0, 00:19:46.360 "io_path_stat": false, 00:19:46.360 "allow_accel_sequence": false, 00:19:46.360 "rdma_max_cq_size": 0, 00:19:46.360 "rdma_cm_event_timeout_ms": 0, 00:19:46.360 "dhchap_digests": [ 00:19:46.360 "sha256", 00:19:46.360 "sha384", 00:19:46.360 "sha512" 00:19:46.360 ], 00:19:46.360 "dhchap_dhgroups": [ 00:19:46.360 "null", 00:19:46.360 "ffdhe2048", 00:19:46.360 "ffdhe3072", 00:19:46.360 "ffdhe4096", 00:19:46.360 "ffdhe6144", 00:19:46.360 "ffdhe8192" 00:19:46.360 ] 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "bdev_nvme_attach_controller", 00:19:46.360 "params": { 00:19:46.360 "name": "nvme0", 00:19:46.360 "trtype": "TCP", 00:19:46.360 "adrfam": "IPv4", 00:19:46.360 "traddr": "10.0.0.3", 00:19:46.360 "trsvcid": "4420", 00:19:46.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.360 "prchk_reftag": false, 00:19:46.360 "prchk_guard": false, 00:19:46.360 "ctrlr_loss_timeout_sec": 0, 00:19:46.360 "reconnect_delay_sec": 0, 00:19:46.360 "fast_io_fail_timeout_sec": 0, 00:19:46.360 "psk": "key0", 00:19:46.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.360 "hdgst": false, 00:19:46.360 "ddgst": false, 00:19:46.360 "multipath": "multipath" 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "bdev_nvme_set_hotplug", 00:19:46.360 "params": { 00:19:46.360 "period_us": 100000, 00:19:46.360 "enable": false 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "bdev_enable_histogram", 00:19:46.360 "params": { 00:19:46.360 "name": "nvme0n1", 00:19:46.360 "enable": true 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "method": "bdev_wait_for_examine" 00:19:46.360 } 00:19:46.360 ] 00:19:46.360 }, 00:19:46.360 { 
00:19:46.360 "subsystem": "nbd", 00:19:46.360 "config": [] 00:19:46.360 } 00:19:46.360 ] 00:19:46.360 }' 00:19:46.619 [2024-12-10 11:22:53.194395] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:46.619 [2024-12-10 11:22:53.194566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76168 ] 00:19:46.619 [2024-12-10 11:22:53.379289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.878 [2024-12-10 11:22:53.503527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.136 [2024-12-10 11:22:53.774833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:47.136 [2024-12-10 11:22:53.898411] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.395 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.395 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:47.653 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:47.653 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:47.653 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.653 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:47.911 Running I/O for 1 seconds... 
00:19:49.102 2742.00 IOPS, 10.71 MiB/s 00:19:49.102 Latency(us) 00:19:49.102 [2024-12-10T11:22:55.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.102 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:49.102 Verification LBA range: start 0x0 length 0x2000 00:19:49.102 nvme0n1 : 1.03 2776.59 10.85 0.00 0.00 45217.42 8579.26 29074.15 00:19:49.102 [2024-12-10T11:22:55.928Z] =================================================================================================================== 00:19:49.102 [2024-12-10T11:22:55.928Z] Total : 2776.59 10.85 0.00 0.00 45217.42 8579.26 29074.15 00:19:49.102 { 00:19:49.102 "results": [ 00:19:49.102 { 00:19:49.102 "job": "nvme0n1", 00:19:49.102 "core_mask": "0x2", 00:19:49.102 "workload": "verify", 00:19:49.102 "status": "finished", 00:19:49.102 "verify_range": { 00:19:49.102 "start": 0, 00:19:49.102 "length": 8192 00:19:49.102 }, 00:19:49.102 "queue_depth": 128, 00:19:49.102 "io_size": 4096, 00:19:49.102 "runtime": 1.033641, 00:19:49.102 "iops": 2776.59264677001, 00:19:49.102 "mibps": 10.846065026445352, 00:19:49.102 "io_failed": 0, 00:19:49.102 "io_timeout": 0, 00:19:49.102 "avg_latency_us": 45217.421582515046, 00:19:49.102 "min_latency_us": 8579.258181818182, 00:19:49.102 "max_latency_us": 29074.15272727273 00:19:49.102 } 00:19:49.102 ], 00:19:49.102 "core_count": 1 00:19:49.102 } 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:49.102 nvmf_trace.0 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 76168 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76168 ']' 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76168 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76168 00:19:49.102 11:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:49.102 killing process with pid 76168 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76168' 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76168 00:19:49.102 Received shutdown signal, test time was about 1.000000 seconds 00:19:49.102 00:19:49.102 Latency(us) 00:19:49.102 [2024-12-10T11:22:55.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.102 [2024-12-10T11:22:55.928Z] =================================================================================================================== 00:19:49.102 [2024-12-10T11:22:55.928Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.102 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76168 00:19:50.037 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:50.037 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:50.038 rmmod nvme_tcp 00:19:50.038 rmmod nvme_fabrics 00:19:50.038 rmmod nvme_keyring 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 76136 ']' 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 76136 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76136 ']' 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76136 00:19:50.038 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.323 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.323 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76136 00:19:50.323 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.323 killing process with pid 76136 00:19:50.323 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.323 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76136' 00:19:50.323 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76136 00:19:50.323 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 76136 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:51.264 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:51.264 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:51.264 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:51.264 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.264 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:51.264 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:51.264 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:51.264 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:51.264 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.AgdskNEoHq /tmp/tmp.XPjE3g3Nou /tmp/tmp.LB5BaTl8XS 00:19:51.524 00:19:51.524 real 1m51.909s 00:19:51.524 user 3m7.502s 00:19:51.524 sys 0m26.661s 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
00:19:51.524 ************************************ 00:19:51.524 END TEST nvmf_tls 00:19:51.524 ************************************ 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.524 ************************************ 00:19:51.524 START TEST nvmf_fips 00:19:51.524 ************************************ 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:51.524 * Looking for test storage... 00:19:51.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:51.524 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:51.783 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.784 --rc genhtml_branch_coverage=1 00:19:51.784 --rc genhtml_function_coverage=1 00:19:51.784 --rc genhtml_legend=1 00:19:51.784 --rc geninfo_all_blocks=1 00:19:51.784 --rc geninfo_unexecuted_blocks=1 00:19:51.784 00:19:51.784 ' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.784 --rc genhtml_branch_coverage=1 00:19:51.784 --rc genhtml_function_coverage=1 00:19:51.784 --rc genhtml_legend=1 00:19:51.784 --rc geninfo_all_blocks=1 00:19:51.784 --rc geninfo_unexecuted_blocks=1 00:19:51.784 00:19:51.784 ' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.784 --rc genhtml_branch_coverage=1 00:19:51.784 --rc genhtml_function_coverage=1 00:19:51.784 --rc genhtml_legend=1 00:19:51.784 --rc geninfo_all_blocks=1 00:19:51.784 --rc geninfo_unexecuted_blocks=1 00:19:51.784 00:19:51.784 ' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.784 --rc genhtml_branch_coverage=1 00:19:51.784 --rc genhtml_function_coverage=1 00:19:51.784 --rc genhtml_legend=1 00:19:51.784 --rc geninfo_all_blocks=1 00:19:51.784 --rc geninfo_unexecuted_blocks=1 00:19:51.784 00:19:51.784 ' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.784 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:51.785 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:52.044 Error setting digest 00:19:52.044 4002848D7F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:52.044 4002848D7F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:52.044 
11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:52.044 Cannot find device "nvmf_init_br" 00:19:52.044 11:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:52.044 Cannot find device "nvmf_init_br2" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:52.044 Cannot find device "nvmf_tgt_br" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.044 Cannot find device "nvmf_tgt_br2" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:52.044 Cannot find device "nvmf_init_br" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:52.044 Cannot find device "nvmf_init_br2" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:52.044 Cannot find device "nvmf_tgt_br" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:52.044 Cannot find device "nvmf_tgt_br2" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:52.044 Cannot find device "nvmf_br" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:52.044 Cannot find device "nvmf_init_if" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:52.044 Cannot find device "nvmf_init_if2" 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:19:52.044 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:52.045 11:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:52.045 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:52.303 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:52.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:52.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:19:52.304 00:19:52.304 --- 10.0.0.3 ping statistics --- 00:19:52.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.304 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:52.304 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:52.304 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:19:52.304 00:19:52.304 --- 10.0.0.4 ping statistics --- 00:19:52.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.304 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:52.304 00:19:52.304 --- 10.0.0.1 ping statistics --- 00:19:52.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.304 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:52.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:52.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:19:52.304 00:19:52.304 --- 10.0.0.2 ping statistics --- 00:19:52.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.304 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:52.304 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=76508 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 76508 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 76508 ']' 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.304 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:52.562 [2024-12-10 11:22:59.163903] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:19:52.562 [2024-12-10 11:22:59.164055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.562 [2024-12-10 11:22:59.345248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.821 [2024-12-10 11:22:59.470660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.821 [2024-12-10 11:22:59.470739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.821 [2024-12-10 11:22:59.470763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.821 [2024-12-10 11:22:59.470786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.821 [2024-12-10 11:22:59.470803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.821 [2024-12-10 11:22:59.472433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.079 [2024-12-10 11:22:59.691245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:53.337 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.337 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.IWq 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.IWq 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.IWq 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.IWq 00:19:53.338 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:53.596 [2024-12-10 11:23:00.413353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.854 [2024-12-10 11:23:00.429288] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.854 [2024-12-10 11:23:00.429613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:53.854 malloc0 00:19:53.854 11:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.854 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=76550 00:19:53.854 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.854 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 76550 /var/tmp/bdevperf.sock 00:19:53.854 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 76550 ']' 00:19:53.854 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.854 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.854 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.854 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.854 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:54.112 [2024-12-10 11:23:00.689527] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:19:54.112 [2024-12-10 11:23:00.689675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76550 ] 00:19:54.112 [2024-12-10 11:23:00.863794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.369 [2024-12-10 11:23:00.984176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.369 [2024-12-10 11:23:01.167363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:54.936 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.936 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:54.937 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.IWq 00:19:55.197 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.455 [2024-12-10 11:23:02.253275] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.714 TLSTESTn1 00:19:55.714 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.714 Running I/O for 10 seconds... 
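[editor's note] The TLS attach sequence that produced the run above condenses to a handful of RPCs. This is a sketch assembled from the trace, not the fips.sh helper itself: the PSK value, the 10.0.0.3:4420 listener, and the subsystem/host NQNs are the ones used by this run, paths are relative to the SPDK repo root, and the target-side subsystem is assumed to have already been configured by setup_nvmf_tgt_conf. The workload runs against the already-started bdevperf instance (-w verify -t 10 on /var/tmp/bdevperf.sock).

  # Write the interchange-format PSK to a 0600 temp file (as fips.sh does).
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"
  # Register the PSK with the bdevperf app, then attach the controller over TLS.
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  # Drive the 10-second verify workload against the resulting TLSTESTn1 namespace.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests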
00:19:58.023 2746.00 IOPS, 10.73 MiB/s [2024-12-10T11:23:05.782Z] 2810.50 IOPS, 10.98 MiB/s [2024-12-10T11:23:06.718Z] 2826.67 IOPS, 11.04 MiB/s [2024-12-10T11:23:07.651Z] 2861.75 IOPS, 11.18 MiB/s [2024-12-10T11:23:08.611Z] 2885.60 IOPS, 11.27 MiB/s [2024-12-10T11:23:09.546Z] 2899.50 IOPS, 11.33 MiB/s [2024-12-10T11:23:10.920Z] 2909.71 IOPS, 11.37 MiB/s [2024-12-10T11:23:11.855Z] 2915.00 IOPS, 11.39 MiB/s [2024-12-10T11:23:12.790Z] 2920.33 IOPS, 11.41 MiB/s [2024-12-10T11:23:12.790Z] 2925.90 IOPS, 11.43 MiB/s 00:20:05.964 Latency(us) 00:20:05.964 [2024-12-10T11:23:12.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.964 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:05.964 Verification LBA range: start 0x0 length 0x2000 00:20:05.964 TLSTESTn1 : 10.02 2931.61 11.45 0.00 0.00 43575.13 8817.57 30384.87 00:20:05.964 [2024-12-10T11:23:12.790Z] =================================================================================================================== 00:20:05.964 [2024-12-10T11:23:12.790Z] Total : 2931.61 11.45 0.00 0.00 43575.13 8817.57 30384.87 00:20:05.964 { 00:20:05.964 "results": [ 00:20:05.964 { 00:20:05.964 "job": "TLSTESTn1", 00:20:05.964 "core_mask": "0x4", 00:20:05.964 "workload": "verify", 00:20:05.964 "status": "finished", 00:20:05.964 "verify_range": { 00:20:05.964 "start": 0, 00:20:05.964 "length": 8192 00:20:05.964 }, 00:20:05.964 "queue_depth": 128, 00:20:05.964 "io_size": 4096, 00:20:05.964 "runtime": 10.023502, 00:20:05.964 "iops": 2931.610129872773, 00:20:05.964 "mibps": 11.45160206981552, 00:20:05.964 "io_failed": 0, 00:20:05.964 "io_timeout": 0, 00:20:05.964 "avg_latency_us": 43575.126355004875, 00:20:05.964 "min_latency_us": 8817.57090909091, 00:20:05.964 "max_latency_us": 30384.872727272726 00:20:05.964 } 00:20:05.964 ], 00:20:05.964 "core_count": 1 00:20:05.964 } 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:05.964 nvmf_trace.0 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 76550 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 76550 ']' 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
76550 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76550 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:05.964 killing process with pid 76550 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76550' 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 76550 00:20:05.964 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.964 00:20:05.964 Latency(us) 00:20:05.964 [2024-12-10T11:23:12.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.964 [2024-12-10T11:23:12.790Z] =================================================================================================================== 00:20:05.964 [2024-12-10T11:23:12.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.964 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 76550 00:20:06.898 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:06.898 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:06.898 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:07.156 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:07.156 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:07.156 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:07.156 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:07.156 rmmod nvme_tcp 00:20:07.156 rmmod nvme_fabrics 00:20:07.156 rmmod nvme_keyring 00:20:07.156 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:07.156 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:07.156 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:07.156 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 76508 ']' 00:20:07.156 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 76508 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 76508 ']' 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 76508 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76508 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:20:07.157 killing process with pid 76508 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76508' 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 76508 00:20:07.157 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 76508 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:08.533 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:20:08.533 11:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.IWq 00:20:08.533 00:20:08.533 real 0m16.926s 00:20:08.533 user 0m24.780s 00:20:08.533 sys 0m5.425s 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.533 ************************************ 00:20:08.533 END TEST nvmf_fips 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.533 ************************************ 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:08.533 ************************************ 00:20:08.533 START TEST nvmf_control_msg_list 00:20:08.533 ************************************ 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:08.533 * Looking for test storage... 00:20:08.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:08.533 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:08.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.792 --rc genhtml_branch_coverage=1 00:20:08.792 --rc genhtml_function_coverage=1 00:20:08.792 --rc genhtml_legend=1 00:20:08.792 --rc geninfo_all_blocks=1 00:20:08.792 --rc geninfo_unexecuted_blocks=1 00:20:08.792 00:20:08.792 ' 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:08.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.792 --rc genhtml_branch_coverage=1 00:20:08.792 --rc genhtml_function_coverage=1 00:20:08.792 --rc genhtml_legend=1 00:20:08.792 --rc geninfo_all_blocks=1 00:20:08.792 --rc geninfo_unexecuted_blocks=1 00:20:08.792 00:20:08.792 ' 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:08.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.792 --rc genhtml_branch_coverage=1 00:20:08.792 --rc genhtml_function_coverage=1 00:20:08.792 --rc genhtml_legend=1 00:20:08.792 --rc geninfo_all_blocks=1 00:20:08.792 --rc geninfo_unexecuted_blocks=1 00:20:08.792 00:20:08.792 ' 00:20:08.792 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:08.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.792 --rc genhtml_branch_coverage=1 00:20:08.792 --rc genhtml_function_coverage=1 00:20:08.792 --rc genhtml_legend=1 00:20:08.792 --rc geninfo_all_blocks=1 00:20:08.792 --rc 
geninfo_unexecuted_blocks=1 00:20:08.792 00:20:08.792 ' 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:08.793 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:08.793 Cannot find device "nvmf_init_br" 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:08.793 Cannot find device "nvmf_init_br2" 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:08.793 Cannot find device "nvmf_tgt_br" 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.793 Cannot find device "nvmf_tgt_br2" 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:08.793 Cannot find device "nvmf_init_br" 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:08.793 Cannot find device "nvmf_init_br2" 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:08.793 Cannot find device "nvmf_tgt_br" 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:08.793 Cannot find device "nvmf_tgt_br2" 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:08.793 Cannot find device "nvmf_br" 00:20:08.793 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:20:08.794 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:08.794 Cannot find 
device "nvmf_init_if" 00:20:08.794 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:20:08.794 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:09.052 Cannot find device "nvmf_init_if2" 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:09.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:09.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:09.052 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:09.052 11:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:09.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:09.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:20:09.053 00:20:09.053 --- 10.0.0.3 ping statistics --- 00:20:09.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.053 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:09.053 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:09.053 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:20:09.053 00:20:09.053 --- 10.0.0.4 ping statistics --- 00:20:09.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.053 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:09.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:09.053 00:20:09.053 --- 10.0.0.1 ping statistics --- 00:20:09.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.053 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:09.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:20:09.053 00:20:09.053 --- 10.0.0.2 ping statistics --- 00:20:09.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.053 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.053 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.311 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=76955 00:20:09.311 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 76955 00:20:09.311 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:09.311 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 76955 ']' 00:20:09.311 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.311 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.311 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
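For reference, the nvmf/common.sh network bring-up traced above reduces to a short, hand-runnable sequence. The following is a minimal sketch reconstructed only from the commands visible in this log (namespace, interface, and address names as logged; assumes root privileges); it is not the harness script itself, and the second veth pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4) is omitted because it follows the same pattern:

    # sketch: one initiator-side and one target-side veth pair joined by a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root netns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end is moved into the test netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                                # bridge the two peer ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator side
    ping -c 1 10.0.0.3                                             # root netns reaches the target address over the bridge
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # the harness then polls until the target's RPC socket /var/tmp/spdk.sock accepts requests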
00:20:09.311 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.311 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.311 [2024-12-10 11:23:15.997987] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:20:09.311 [2024-12-10 11:23:15.998177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.569 [2024-12-10 11:23:16.193079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.569 [2024-12-10 11:23:16.319994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.569 [2024-12-10 11:23:16.320070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.569 [2024-12-10 11:23:16.320096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.569 [2024-12-10 11:23:16.320135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.569 [2024-12-10 11:23:16.320169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.569 [2024-12-10 11:23:16.321617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.827 [2024-12-10 11:23:16.511938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:10.395 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.395 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:10.395 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.395 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.395 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.395 [2024-12-10 11:23:17.046077] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.395 Malloc0 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.395 [2024-12-10 11:23:17.111706] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=76987 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=76988 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=76989 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 76987 00:20:10.395 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:10.653 [2024-12-10 11:23:17.367155] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:10.653 [2024-12-10 11:23:17.367503] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:10.653 [2024-12-10 11:23:17.377825] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:11.589 Initializing NVMe Controllers 00:20:11.589 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:11.589 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:11.589 Initialization complete. Launching workers. 00:20:11.589 ======================================================== 00:20:11.589 Latency(us) 00:20:11.589 Device Information : IOPS MiB/s Average min max 00:20:11.589 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2497.00 9.75 399.82 195.95 1568.80 00:20:11.589 ======================================================== 00:20:11.589 Total : 2497.00 9.75 399.82 195.95 1568.80 00:20:11.589 00:20:11.589 Initializing NVMe Controllers 00:20:11.589 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:11.589 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:11.589 Initialization complete. Launching workers. 00:20:11.589 ======================================================== 00:20:11.589 Latency(us) 00:20:11.589 Device Information : IOPS MiB/s Average min max 00:20:11.589 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2554.00 9.98 390.92 235.78 810.26 00:20:11.589 ======================================================== 00:20:11.589 Total : 2554.00 9.98 390.92 235.78 810.26 00:20:11.589 00:20:11.589 Initializing NVMe Controllers 00:20:11.589 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:11.589 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:11.589 Initialization complete. Launching workers. 
00:20:11.589 ======================================================== 00:20:11.589 Latency(us) 00:20:11.589 Device Information : IOPS MiB/s Average min max 00:20:11.589 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2585.85 10.10 386.08 225.61 720.35 00:20:11.589 ======================================================== 00:20:11.589 Total : 2585.85 10.10 386.08 225.61 720.35 00:20:11.589 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 76988 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 76989 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.854 rmmod nvme_tcp 00:20:11.854 rmmod nvme_fabrics 00:20:11.854 rmmod nvme_keyring 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 76955 ']' 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 76955 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 76955 ']' 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 76955 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76955 00:20:11.854 killing process with pid 76955 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76955' 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 76955 00:20:11.854 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 76955 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:20:13.243 00:20:13.243 real 0m4.678s 00:20:13.243 user 0m6.967s 00:20:13.243 
sys 0m1.535s 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.243 ************************************ 00:20:13.243 END TEST nvmf_control_msg_list 00:20:13.243 ************************************ 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:13.243 ************************************ 00:20:13.243 START TEST nvmf_wait_for_buf 00:20:13.243 ************************************ 00:20:13.243 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:13.243 * Looking for test storage... 00:20:13.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:13.243 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:13.243 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:13.243 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:13.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.503 --rc genhtml_branch_coverage=1 00:20:13.503 --rc genhtml_function_coverage=1 00:20:13.503 --rc genhtml_legend=1 00:20:13.503 --rc geninfo_all_blocks=1 00:20:13.503 --rc geninfo_unexecuted_blocks=1 00:20:13.503 00:20:13.503 ' 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:13.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.503 --rc genhtml_branch_coverage=1 00:20:13.503 --rc genhtml_function_coverage=1 00:20:13.503 --rc genhtml_legend=1 00:20:13.503 --rc geninfo_all_blocks=1 00:20:13.503 --rc geninfo_unexecuted_blocks=1 00:20:13.503 00:20:13.503 ' 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:13.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.503 --rc genhtml_branch_coverage=1 00:20:13.503 --rc genhtml_function_coverage=1 00:20:13.503 --rc genhtml_legend=1 00:20:13.503 --rc geninfo_all_blocks=1 00:20:13.503 --rc geninfo_unexecuted_blocks=1 00:20:13.503 00:20:13.503 ' 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:13.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.503 --rc genhtml_branch_coverage=1 00:20:13.503 --rc genhtml_function_coverage=1 00:20:13.503 --rc genhtml_legend=1 00:20:13.503 --rc geninfo_all_blocks=1 00:20:13.503 --rc geninfo_unexecuted_blocks=1 00:20:13.503 00:20:13.503 ' 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:13.503 11:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.503 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.504 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
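For context on the nvmf_control_msg_list run that finished above: stripped of the xtrace noise, it is a small RPC-plus-perf sequence. Below is a hedged reconstruction built only from the logged rpc_cmd and spdk_nvme_perf invocations; the scripts/rpc.py path is an assumption (the harness's rpc_cmd wrapper is not shown in this log), while the method names and arguments are copied from the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py               # assumed location of the SPDK RPC client
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $RPC bdev_malloc_create -b Malloc0 32 512
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # three initiators on cores 1-3, one queue each, 4 KiB random reads for 1 s
    for core in 0x2 0x4 0x8; do
        $PERF -c "$core" -q 1 -o 4096 -w randread -t 1 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
    done
    wait

With --control-msg-num 1 the target is left with a single control-message buffer while three initiators connect at once, which appears to be the contention this test exercises; the three per-core latency tables above are the corresponding results.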
00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:13.504 Cannot find device "nvmf_init_br" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:13.504 Cannot find device "nvmf_init_br2" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:13.504 Cannot find device "nvmf_tgt_br" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.504 Cannot find device "nvmf_tgt_br2" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:13.504 Cannot find device "nvmf_init_br" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:13.504 Cannot find device "nvmf_init_br2" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:13.504 Cannot find device "nvmf_tgt_br" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:13.504 Cannot find device "nvmf_tgt_br2" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:13.504 Cannot find device "nvmf_br" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:13.504 Cannot find device "nvmf_init_if" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:13.504 Cannot find device "nvmf_init_if2" 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:20:13.504 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.762 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:13.762 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.762 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:20:13.762 00:20:13.762 --- 10.0.0.3 ping statistics --- 00:20:13.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.762 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:13.762 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:13.762 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:20:13.762 00:20:13.762 --- 10.0.0.4 ping statistics --- 00:20:13.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.762 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:13.762 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:14.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:20:14.021 00:20:14.021 --- 10.0.0.1 ping statistics --- 00:20:14.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.021 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:14.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:14.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:20:14.021 00:20:14.021 --- 10.0.0.2 ping statistics --- 00:20:14.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.021 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=77240 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 77240 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 77240 ']' 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.021 11:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:14.021 [2024-12-10 11:23:20.762824] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
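The nvmf_wait_for_buf target brought up here is started with --wait-for-rpc, so subsystem initialization is driven over RPC; the configuration and check traced further below amount to the sketch that follows (same hedges as the sketch above: the rpc.py path is an assumption, all method names and arguments are copied from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py               # assumed RPC client path
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
    $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny small-buffer pool
    $RPC framework_start_init                                            # finish the startup deferred by --wait-for-rpc
    $RPC bdev_malloc_create -b Malloc0 32 512
    $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # 128 KiB reads at queue depth 4 are expected to overrun the shrunken pool and force buffer-wait retries
    $PERF -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    retry_count=$($RPC iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && echo "no buffer-wait retries observed"   # the logged run records 4788 retries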
00:20:14.021 [2024-12-10 11:23:20.762997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.279 [2024-12-10 11:23:20.949999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.279 [2024-12-10 11:23:21.075322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.279 [2024-12-10 11:23:21.075423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.279 [2024-12-10 11:23:21.075458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.279 [2024-12-10 11:23:21.075486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.279 [2024-12-10 11:23:21.075507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.279 [2024-12-10 11:23:21.076940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:15.213 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.213 11:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.213 [2024-12-10 11:23:21.916137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:15.213 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.214 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:15.214 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.214 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.472 Malloc0 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.472 [2024-12-10 11:23:22.079943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.472 [2024-12-10 11:23:22.104165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.472 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:15.730 [2024-12-10 11:23:22.369578] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:17.106 Initializing NVMe Controllers 00:20:17.106 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:17.106 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:17.106 Initialization complete. Launching workers. 00:20:17.106 ======================================================== 00:20:17.106 Latency(us) 00:20:17.106 Device Information : IOPS MiB/s Average min max 00:20:17.106 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7983.38 6996.86 8234.51 00:20:17.106 ======================================================== 00:20:17.106 Total : 504.00 63.00 7983.38 6996.86 8234.51 00:20:17.106 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:17.106 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:17.107 rmmod nvme_tcp 00:20:17.107 rmmod nvme_fabrics 00:20:17.107 rmmod nvme_keyring 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 77240 ']' 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 77240 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 77240 ']' 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 77240 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77240 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.107 killing process with pid 77240 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77240' 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 77240 00:20:17.107 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 77240 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:18.042 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:18.301 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.301 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:18.301 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:18.301 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:18.301 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:18.301 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:18.301 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:18.301 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:18.301 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:20:18.301 00:20:18.301 real 0m5.115s 00:20:18.301 user 0m4.621s 00:20:18.301 sys 0m0.936s 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.301 ************************************ 00:20:18.301 END TEST nvmf_wait_for_buf 00:20:18.301 ************************************ 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.301 11:23:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:18.560 ************************************ 00:20:18.560 START TEST nvmf_fuzz 00:20:18.560 ************************************ 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:18.560 * Looking for test storage... 
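The nvmf_wait_for_buf run that finished just above works by shrinking the shared iobuf small pool to 154 buffers before the TCP transport is created with -n 24 -b 24, so that the perf workload exhausts the pool and the transport has to queue buffer requests; the test passes because iobuf_get_stats reports a non-zero small_pool.retry count (4788 here). A condensed sketch of the same sequence driven by hand with rpc.py, using exactly the options seen in the trace (this assumes an nvmf_tgt started with --wait-for-rpc on the default /var/tmp/spdk.sock socket; the 10.0.0.3 listener address and binary paths are specific to this test environment):

    # sketch: reproduce the wait_for_buf setup manually (options copied from the trace above)
    rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc.py framework_start_init
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # drive I/O through the tiny pool, then check how often buffer allocation had to retry
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'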
00:20:18.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:18.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.560 --rc genhtml_branch_coverage=1 00:20:18.560 --rc genhtml_function_coverage=1 00:20:18.560 --rc genhtml_legend=1 00:20:18.560 --rc geninfo_all_blocks=1 00:20:18.560 --rc geninfo_unexecuted_blocks=1 00:20:18.560 00:20:18.560 ' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:18.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.560 --rc genhtml_branch_coverage=1 00:20:18.560 --rc genhtml_function_coverage=1 00:20:18.560 --rc genhtml_legend=1 00:20:18.560 --rc geninfo_all_blocks=1 00:20:18.560 --rc geninfo_unexecuted_blocks=1 00:20:18.560 00:20:18.560 ' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:18.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.560 --rc genhtml_branch_coverage=1 00:20:18.560 --rc genhtml_function_coverage=1 00:20:18.560 --rc genhtml_legend=1 00:20:18.560 --rc geninfo_all_blocks=1 00:20:18.560 --rc geninfo_unexecuted_blocks=1 00:20:18.560 00:20:18.560 ' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:18.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.560 --rc genhtml_branch_coverage=1 00:20:18.560 --rc genhtml_function_coverage=1 00:20:18.560 --rc genhtml_legend=1 00:20:18.560 --rc geninfo_all_blocks=1 00:20:18.560 --rc geninfo_unexecuted_blocks=1 00:20:18.560 00:20:18.560 ' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
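The scripts/common.sh trace above is the harness deciding which lcov options to use: lt 1.15 2 splits both version strings on dots and dashes and compares them field by field until one side wins. A stripped-down sketch of that comparison (the helper name is illustrative, not the one in scripts/common.sh; the IFS=.- splitting matches the trace):

    # sketch: dotted-version "less than" check, modelled on the cmp_versions trace above
    version_lt() {                 # version_lt 1.15 2  -> returns 0 (true)
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        done
        return 1                   # equal versions are not "less than"
    }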
00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:18.560 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
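The "line 33: [: : integer expression expected" message in the trace above comes from evaluating '[' '' -eq 1 ']' with an unset flag: the single-bracket -eq test needs an integer on both sides, so an empty variable makes it print this complaint and return false, and the script simply carries on. A small sketch of the failure mode and the usual guard (the variable name here is illustrative, not the one used in common.sh):

    # sketch: why an empty variable trips "[: : integer expression expected"
    flag=""
    [ "$flag" -eq 1 ] && echo "enabled"       # prints the error; the test evaluates false
    [ "${flag:-0}" -eq 1 ] && echo "enabled"  # defaulting to 0 avoids the error entirely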
00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:18.560 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:18.561 Cannot find device "nvmf_init_br" 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:20:18.561 11:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:18.561 Cannot find device "nvmf_init_br2" 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:18.561 Cannot find device "nvmf_tgt_br" 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:20:18.561 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.819 Cannot find device "nvmf_tgt_br2" 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:18.819 Cannot find device "nvmf_init_br" 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:18.819 Cannot find device "nvmf_init_br2" 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:18.819 Cannot find device "nvmf_tgt_br" 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:18.819 Cannot find device "nvmf_tgt_br2" 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:18.819 Cannot find device "nvmf_br" 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:18.819 Cannot find device "nvmf_init_if" 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:18.819 Cannot find device "nvmf_init_if2" 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:18.819 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:19.078 11:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:19.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:19.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:20:19.078 00:20:19.078 --- 10.0.0.3 ping statistics --- 00:20:19.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.078 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:19.078 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:19.078 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:20:19.078 00:20:19.078 --- 10.0.0.4 ping statistics --- 00:20:19.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.078 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:19.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:20:19.078 00:20:19.078 --- 10.0.0.1 ping statistics --- 00:20:19.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.078 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:19.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:19.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:20:19.078 00:20:19.078 --- 10.0.0.2 ping statistics --- 00:20:19.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.078 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.078 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77546 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77546 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 77546 ']' 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
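Everything from "ip netns add nvmf_tgt_ns_spdk" down to the four pings above is nvmf_veth_init building the test topology: the target side lives inside the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4, the initiator side stays in the root namespace on 10.0.0.1/10.0.0.2, and the veth peers are joined by the nvmf_br bridge with iptables ACCEPT rules for port 4420. A trimmed sketch of the same wiring for a single initiator/target pair (interface and address names copied from the trace; the full helper also creates the second pair and the FORWARD rule):

    # sketch: one initiator/target veth pair of the topology traced above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator -> target across the bridge
    # the target app is then launched inside the namespace, as in the trace:
    # ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1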
00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.079 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:20.454 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.454 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:20:20.454 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.454 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.454 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:20.454 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.454 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:20.454 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.454 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:20.455 Malloc0 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:20:20.455 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:20:21.021 Shutting down the fuzz application 00:20:21.021 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:21.590 Shutting down the fuzz application 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:21.590 rmmod nvme_tcp 00:20:21.590 rmmod nvme_fabrics 00:20:21.590 rmmod nvme_keyring 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 77546 ']' 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 77546 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 77546 ']' 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 77546 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77546 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.590 killing process with pid 77546 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77546' 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 77546 00:20:21.590 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 77546 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:23.032 11:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.032 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.033 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.033 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:20:23.033 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:20:23.033 00:20:23.033 real 0m4.657s 00:20:23.033 user 0m5.110s 00:20:23.033 sys 0m0.898s 00:20:23.033 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.033 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:23.033 ************************************ 00:20:23.033 END TEST nvmf_fuzz 00:20:23.033 ************************************ 00:20:23.299 11:23:29 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:23.299 11:23:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:23.299 11:23:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.299 11:23:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:23.299 ************************************ 00:20:23.299 START TEST nvmf_multiconnection 00:20:23.299 ************************************ 00:20:23.299 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:23.299 * Looking for test storage... 00:20:23.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:23.299 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:23.299 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:20:23.299 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:23.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.299 --rc genhtml_branch_coverage=1 00:20:23.299 --rc genhtml_function_coverage=1 00:20:23.299 --rc genhtml_legend=1 00:20:23.299 --rc geninfo_all_blocks=1 00:20:23.299 --rc geninfo_unexecuted_blocks=1 00:20:23.299 00:20:23.299 ' 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:23.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.299 --rc genhtml_branch_coverage=1 00:20:23.299 --rc genhtml_function_coverage=1 00:20:23.299 --rc genhtml_legend=1 00:20:23.299 --rc geninfo_all_blocks=1 00:20:23.299 --rc geninfo_unexecuted_blocks=1 00:20:23.299 00:20:23.299 ' 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:23.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.299 --rc genhtml_branch_coverage=1 00:20:23.299 --rc genhtml_function_coverage=1 00:20:23.299 --rc genhtml_legend=1 00:20:23.299 --rc geninfo_all_blocks=1 00:20:23.299 --rc geninfo_unexecuted_blocks=1 00:20:23.299 00:20:23.299 ' 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:23.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.299 --rc genhtml_branch_coverage=1 00:20:23.299 --rc genhtml_function_coverage=1 00:20:23.299 --rc genhtml_legend=1 00:20:23.299 --rc geninfo_all_blocks=1 00:20:23.299 --rc geninfo_unexecuted_blocks=1 00:20:23.299 00:20:23.299 ' 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.299 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.300 
11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
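The common.sh environment sourced in the trace above is what gives every later "nvme connect" its host identity: the target ports (4420, plus 4421/4422 as spares) and a host NQN freshly generated with nvme gen-hostnqn. The "[: : integer expression expected" message from common.sh line 33 comes from comparing an empty variable with -eq and is evidently harmless here, since the run carries on. A minimal sketch of the identity derivation (the UUID extraction shown is one way to do it, not necessarily the exact expression common.sh uses):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID part of the NQN (illustrative extraction)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")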
MALLOC_BDEV_SIZE=64 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.300 11:23:30 
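What nvmf_veth_init is about to build (NET_TYPE=virt, so everything is virtual): two veth pairs whose initiator ends stay in the root namespace with 10.0.0.1/24 and 10.0.0.2/24, whose target ends move into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, and whose peer ends are enslaved to the nvmf_br bridge so the two sides can reach each other. A condensed sketch of the first pair (the trace below performs the same steps for both pairs, after first tearing down any leftovers from a previous run):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br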
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:23.300 Cannot find device "nvmf_init_br" 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:23.300 Cannot find device "nvmf_init_br2" 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:23.300 Cannot find device "nvmf_tgt_br" 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.300 Cannot find device "nvmf_tgt_br2" 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:23.300 Cannot find device "nvmf_init_br" 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:20:23.300 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:23.558 Cannot find device "nvmf_init_br2" 00:20:23.558 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:20:23.558 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:23.558 Cannot find device "nvmf_tgt_br" 00:20:23.558 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:20:23.558 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:23.558 Cannot find device "nvmf_tgt_br2" 00:20:23.558 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:20:23.558 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:23.558 Cannot find device "nvmf_br" 00:20:23.558 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:23.559 Cannot find device "nvmf_init_if" 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:20:23.559 Cannot find device "nvmf_init_if2" 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.559 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.817 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.817 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:23.817 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:23.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:20:23.818 00:20:23.818 --- 10.0.0.3 ping statistics --- 00:20:23.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.818 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:23.818 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:23.818 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 00:20:23.818 00:20:23.818 --- 10.0.0.4 ping statistics --- 00:20:23.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.818 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:23.818 00:20:23.818 --- 10.0.0.1 ping statistics --- 00:20:23.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.818 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:23.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:20:23.818 00:20:23.818 --- 10.0.0.2 ping statistics --- 00:20:23.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.818 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=77812 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 77812 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 77812 ']' 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
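With the veth topology verified by the pings above, nvmfappstart launches the SPDK target inside the target namespace and waitforlisten blocks until the target's JSON-RPC socket (/var/tmp/spdk.sock, a UNIX socket, so reachable from the root namespace) is answering. A stripped-down sketch of the same sequence, using the paths from this run; the readiness loop is illustrative and far simpler than the real waitforlisten helper:

  SPDK=/home/vagrant/spdk_repo/spdk                        # repo path used in this run
  ip netns exec nvmf_tgt_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &   # shm id 0, full tracepoints, 4 cores
  nvmfpid=$!
  # wait until the target's RPC socket responds before issuing any configuration RPCs
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.5
  done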
00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.818 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:23.818 [2024-12-10 11:23:30.566575] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:20:23.818 [2024-12-10 11:23:30.566736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.076 [2024-12-10 11:23:30.759865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.076 [2024-12-10 11:23:30.895812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.076 [2024-12-10 11:23:30.895883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.076 [2024-12-10 11:23:30.895906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.076 [2024-12-10 11:23:30.895922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.077 [2024-12-10 11:23:30.895938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.077 [2024-12-10 11:23:30.898158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.077 [2024-12-10 11:23:30.898275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.077 [2024-12-10 11:23:30.898617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.077 [2024-12-10 11:23:30.899118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.336 [2024-12-10 11:23:31.132986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:24.904 [2024-12-10 11:23:31.610453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:20:24.904 11:23:31 
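Everything from here to the connect phase is one pattern repeated eleven times (NVMF_SUBSYS=11), with only the index changing: create a 64 MiB malloc bdev with 512-byte blocks, create a subsystem that allows any host and carries serial SPDKi, attach the bdev as its namespace, and add a TCP listener on the target-side address. A single pass written against rpc.py directly (the trace goes through the rpc_cmd wrapper, which ends up issuing the same RPCs; $SPDK as in the previous sketch):

  i=1
  rpc="$SPDK/scripts/rpc.py"
  "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"                                # 64 MiB, 512 B blocks
  "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"     # -a: allow any host
  "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420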
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:24.904 Malloc1 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.904 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 [2024-12-10 11:23:31.731879] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 Malloc2 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 Malloc3 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.163 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 Malloc4 00:20:25.421 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.421 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:25.421 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.421 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.421 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:25.421 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 Malloc5 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:25.421 
11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.421 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.422 Malloc6 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.422 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.680 Malloc7 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.680 Malloc8 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:25.680 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.681 
11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.681 Malloc9 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.681 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 Malloc10 00:20:25.939 11:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 Malloc11 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:25.939 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:26.198 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:26.198 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:26.198 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:26.198 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:26.198 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:28.100 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:28.100 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:28.100 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:20:28.100 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:28.100 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:28.100 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:28.100 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:28.100 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:20:28.359 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:28.359 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:28.359 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:28.359 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:28.359 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:30.260 11:23:36 
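The final phase loops over the same eleven subsystems from the initiator side: connect the kernel NVMe/TCP host to cnodei, then poll lsblk until a block device whose serial matches SPDKi appears (waitforserial allows up to 16 attempts, two seconds apart, per the i++ <= 15 check in the trace). One iteration in outline, using the host identity generated earlier in this run:

  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81
  NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81
  i=1
  nvme connect -t tcp -a 10.0.0.3 -s 4420 \
      -n "nqn.2016-06.io.spdk:cnode$i" \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  # waitforserial: wait for a namespace with serial SPDK$i to show up
  for ((try = 0; try <= 15; try++)); do
      sleep 2
      (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") >= 1 )) && break
  done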
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:30.260 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:30.260 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:20:30.260 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:30.260 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:30.260 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:30.260 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:30.260 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:20:30.518 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:30.518 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:30.518 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:30.518 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:30.518 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:32.418 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:32.418 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:20:32.418 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:32.418 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:32.418 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:32.418 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:32.418 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:32.418 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:20:32.677 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:20:32.677 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:32.677 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:32.677 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:20:32.677 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:34.579 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:34.579 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:34.579 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:20:34.579 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:34.579 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:34.579 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:34.579 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:34.579 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:20:34.837 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:20:34.837 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:34.837 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:34.837 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:34.837 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:36.740 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:36.740 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:36.740 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:20:36.740 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:36.740 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:36.740 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:36.740 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:36.740 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:20:36.999 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:20:36.999 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:36.999 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:20:36.999 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:36.999 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:38.901 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:38.901 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:38.901 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:20:38.901 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:38.901 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:38.901 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:38.901 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:38.901 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:20:39.160 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:20:39.160 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:39.160 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:39.160 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:39.160 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:41.061 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:41.061 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:41.061 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:20:41.061 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:41.061 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:41.061 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:41.061 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:41.061 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:20:41.320 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:20:41.320 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:20:41.320 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:41.320 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:41.320 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:43.259 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:43.259 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:43.259 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:20:43.259 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:43.259 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:43.259 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:43.259 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:43.259 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:20:43.517 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:20:43.517 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:43.517 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:43.517 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:43.517 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:45.417 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:45.417 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:45.417 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:20:45.417 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:45.417 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:45.417 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:45.417 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:45.417 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:20:45.674 11:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:20:45.674 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:45.674 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:45.674 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:45.674 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:47.576 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:47.576 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:47.576 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:20:47.576 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:47.576 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:47.576 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:47.576 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.576 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:20:47.858 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:20:47.858 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:20:47.858 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:47.858 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:47.858 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:20:49.761 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:49.761 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:49.761 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:20:49.761 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:49.761 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:49.761 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:20:49.761 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:20:49.761 [global] 00:20:49.761 thread=1 00:20:49.761 invalidate=1 00:20:49.761 rw=read 00:20:49.761 time_based=1 
00:20:49.761 runtime=10 00:20:49.761 ioengine=libaio 00:20:49.761 direct=1 00:20:49.761 bs=262144 00:20:49.761 iodepth=64 00:20:49.761 norandommap=1 00:20:49.761 numjobs=1 00:20:49.761 00:20:49.761 [job0] 00:20:49.761 filename=/dev/nvme0n1 00:20:49.761 [job1] 00:20:49.761 filename=/dev/nvme10n1 00:20:49.761 [job2] 00:20:49.761 filename=/dev/nvme1n1 00:20:49.761 [job3] 00:20:49.761 filename=/dev/nvme2n1 00:20:49.761 [job4] 00:20:49.761 filename=/dev/nvme3n1 00:20:49.761 [job5] 00:20:49.761 filename=/dev/nvme4n1 00:20:49.761 [job6] 00:20:49.761 filename=/dev/nvme5n1 00:20:49.761 [job7] 00:20:49.761 filename=/dev/nvme6n1 00:20:49.761 [job8] 00:20:49.761 filename=/dev/nvme7n1 00:20:49.761 [job9] 00:20:49.761 filename=/dev/nvme8n1 00:20:50.019 [job10] 00:20:50.019 filename=/dev/nvme9n1 00:20:50.019 Could not set queue depth (nvme0n1) 00:20:50.019 Could not set queue depth (nvme10n1) 00:20:50.019 Could not set queue depth (nvme1n1) 00:20:50.019 Could not set queue depth (nvme2n1) 00:20:50.019 Could not set queue depth (nvme3n1) 00:20:50.019 Could not set queue depth (nvme4n1) 00:20:50.019 Could not set queue depth (nvme5n1) 00:20:50.019 Could not set queue depth (nvme6n1) 00:20:50.019 Could not set queue depth (nvme7n1) 00:20:50.019 Could not set queue depth (nvme8n1) 00:20:50.019 Could not set queue depth (nvme9n1) 00:20:50.020 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:50.020 fio-3.35 00:20:50.020 Starting 11 threads 00:21:02.253 00:21:02.253 job0: (groupid=0, jobs=1): err= 0: pid=78268: Tue Dec 10 11:24:07 2024 00:21:02.253 read: IOPS=88, BW=22.1MiB/s (23.2MB/s)(226MiB/10183msec) 00:21:02.253 slat (usec): min=19, max=449083, avg=11160.73, stdev=35619.73 00:21:02.253 clat (msec): min=27, max=1129, avg=709.60, stdev=148.64 00:21:02.253 lat (msec): min=34, max=1129, avg=720.76, stdev=150.49 00:21:02.253 clat percentiles (msec): 00:21:02.253 | 1.00th=[ 50], 5.00th=[ 409], 10.00th=[ 625], 20.00th=[ 651], 00:21:02.253 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 735], 60.00th=[ 751], 00:21:02.253 | 70.00th=[ 776], 80.00th=[ 802], 90.00th=[ 852], 95.00th=[ 894], 00:21:02.253 | 99.00th=[ 978], 99.50th=[ 1003], 99.90th=[ 1133], 99.95th=[ 1133], 00:21:02.253 | 99.99th=[ 1133] 00:21:02.253 bw ( KiB/s): min=14336, max=30208, 
per=4.00%, avg=21449.80, stdev=4684.95, samples=20 00:21:02.253 iops : min= 56, max= 118, avg=83.70, stdev=18.43, samples=20 00:21:02.253 lat (msec) : 50=1.11%, 100=0.89%, 250=0.11%, 500=4.21%, 750=53.77% 00:21:02.253 lat (msec) : 1000=39.36%, 2000=0.55% 00:21:02.253 cpu : usr=0.09%, sys=0.39%, ctx=186, majf=0, minf=4097 00:21:02.253 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:21:02.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.253 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.253 issued rwts: total=902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.253 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.253 job1: (groupid=0, jobs=1): err= 0: pid=78269: Tue Dec 10 11:24:07 2024 00:21:02.253 read: IOPS=131, BW=32.8MiB/s (34.4MB/s)(333MiB/10133msec) 00:21:02.253 slat (usec): min=16, max=463291, avg=7058.58, stdev=22772.88 00:21:02.253 clat (msec): min=132, max=1048, avg=479.66, stdev=125.20 00:21:02.253 lat (msec): min=156, max=1158, avg=486.72, stdev=126.94 00:21:02.253 clat percentiles (msec): 00:21:02.253 | 1.00th=[ 182], 5.00th=[ 347], 10.00th=[ 393], 20.00th=[ 422], 00:21:02.253 | 30.00th=[ 439], 40.00th=[ 456], 50.00th=[ 468], 60.00th=[ 481], 00:21:02.253 | 70.00th=[ 493], 80.00th=[ 514], 90.00th=[ 542], 95.00th=[ 693], 00:21:02.253 | 99.00th=[ 978], 99.50th=[ 1036], 99.90th=[ 1045], 99.95th=[ 1045], 00:21:02.253 | 99.99th=[ 1045] 00:21:02.253 bw ( KiB/s): min= 9216, max=43520, per=6.05%, avg=32432.10, stdev=7049.00, samples=20 00:21:02.253 iops : min= 36, max= 170, avg=126.65, stdev=27.55, samples=20 00:21:02.253 lat (msec) : 250=2.33%, 500=71.13%, 750=21.80%, 1000=3.98%, 2000=0.75% 00:21:02.253 cpu : usr=0.04%, sys=0.61%, ctx=265, majf=0, minf=4097 00:21:02.253 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:21:02.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.253 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.253 issued rwts: total=1330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.253 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.253 job2: (groupid=0, jobs=1): err= 0: pid=78270: Tue Dec 10 11:24:07 2024 00:21:02.253 read: IOPS=92, BW=23.0MiB/s (24.1MB/s)(235MiB/10192msec) 00:21:02.253 slat (usec): min=17, max=536260, avg=10676.09, stdev=32971.29 00:21:02.253 clat (msec): min=18, max=1038, avg=683.51, stdev=200.27 00:21:02.253 lat (msec): min=18, max=1235, avg=694.19, stdev=202.72 00:21:02.253 clat percentiles (msec): 00:21:02.253 | 1.00th=[ 25], 5.00th=[ 197], 10.00th=[ 321], 20.00th=[ 642], 00:21:02.253 | 30.00th=[ 667], 40.00th=[ 684], 50.00th=[ 718], 60.00th=[ 751], 00:21:02.253 | 70.00th=[ 776], 80.00th=[ 802], 90.00th=[ 852], 95.00th=[ 953], 00:21:02.253 | 99.00th=[ 1020], 99.50th=[ 1020], 99.90th=[ 1036], 99.95th=[ 1036], 00:21:02.253 | 99.99th=[ 1036] 00:21:02.253 bw ( KiB/s): min= 5120, max=41900, per=4.17%, avg=22370.20, stdev=7694.95, samples=20 00:21:02.253 iops : min= 20, max= 163, avg=87.35, stdev=29.97, samples=20 00:21:02.253 lat (msec) : 20=0.32%, 50=1.81%, 250=6.08%, 500=3.41%, 750=49.25% 00:21:02.253 lat (msec) : 1000=36.25%, 2000=2.88% 00:21:02.253 cpu : usr=0.04%, sys=0.45%, ctx=171, majf=0, minf=4097 00:21:02.253 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:21:02.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.253 complete : 0=0.0%, 4=99.9%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.253 issued rwts: total=938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.253 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.253 job3: (groupid=0, jobs=1): err= 0: pid=78271: Tue Dec 10 11:24:07 2024 00:21:02.253 read: IOPS=150, BW=37.7MiB/s (39.6MB/s)(383MiB/10139msec) 00:21:02.253 slat (usec): min=19, max=118535, avg=6421.22, stdev=17305.83 00:21:02.253 clat (msec): min=11, max=585, avg=416.96, stdev=95.14 00:21:02.254 lat (msec): min=11, max=593, avg=423.38, stdev=96.42 00:21:02.254 clat percentiles (msec): 00:21:02.254 | 1.00th=[ 59], 5.00th=[ 222], 10.00th=[ 284], 20.00th=[ 338], 00:21:02.254 | 30.00th=[ 397], 40.00th=[ 430], 50.00th=[ 451], 60.00th=[ 460], 00:21:02.254 | 70.00th=[ 472], 80.00th=[ 485], 90.00th=[ 510], 95.00th=[ 531], 00:21:02.254 | 99.00th=[ 550], 99.50th=[ 558], 99.90th=[ 584], 99.95th=[ 584], 00:21:02.254 | 99.99th=[ 584] 00:21:02.254 bw ( KiB/s): min=32256, max=55919, per=7.01%, avg=37561.75, stdev=6402.61, samples=20 00:21:02.254 iops : min= 126, max= 218, avg=146.60, stdev=24.92, samples=20 00:21:02.254 lat (msec) : 20=0.07%, 50=0.85%, 100=0.26%, 250=5.75%, 500=79.74% 00:21:02.254 lat (msec) : 750=13.33% 00:21:02.254 cpu : usr=0.08%, sys=0.65%, ctx=297, majf=0, minf=4097 00:21:02.254 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:21:02.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.254 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.254 issued rwts: total=1530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.254 job4: (groupid=0, jobs=1): err= 0: pid=78272: Tue Dec 10 11:24:07 2024 00:21:02.254 read: IOPS=89, BW=22.4MiB/s (23.5MB/s)(228MiB/10185msec) 00:21:02.254 slat (usec): min=17, max=575196, avg=10886.03, stdev=37053.43 00:21:02.254 clat (msec): min=28, max=965, avg=701.98, stdev=210.20 00:21:02.254 lat (msec): min=29, max=1284, avg=712.87, stdev=212.30 00:21:02.254 clat percentiles (msec): 00:21:02.254 | 1.00th=[ 42], 5.00th=[ 88], 10.00th=[ 550], 20.00th=[ 651], 00:21:02.254 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 743], 60.00th=[ 785], 00:21:02.254 | 70.00th=[ 818], 80.00th=[ 844], 90.00th=[ 894], 95.00th=[ 919], 00:21:02.254 | 99.00th=[ 953], 99.50th=[ 953], 99.90th=[ 969], 99.95th=[ 969], 00:21:02.254 | 99.99th=[ 969] 00:21:02.254 bw ( KiB/s): min=13824, max=33346, per=4.06%, avg=21751.55, stdev=6375.27, samples=20 00:21:02.254 iops : min= 54, max= 130, avg=84.90, stdev=24.87, samples=20 00:21:02.254 lat (msec) : 50=1.86%, 100=4.93%, 250=1.10%, 500=1.86%, 750=42.06% 00:21:02.254 lat (msec) : 1000=48.19% 00:21:02.254 cpu : usr=0.06%, sys=0.45%, ctx=183, majf=0, minf=4097 00:21:02.254 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:21:02.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.254 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.254 issued rwts: total=913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.254 job5: (groupid=0, jobs=1): err= 0: pid=78273: Tue Dec 10 11:24:07 2024 00:21:02.254 read: IOPS=149, BW=37.5MiB/s (39.3MB/s)(380MiB/10150msec) 00:21:02.254 slat (usec): min=20, max=107650, avg=6570.48, stdev=16844.77 00:21:02.254 clat (msec): min=72, max=580, avg=419.71, stdev=94.94 00:21:02.254 lat (msec): min=72, max=580, avg=426.28, stdev=95.56 
00:21:02.254 clat percentiles (msec): 00:21:02.254 | 1.00th=[ 83], 5.00th=[ 241], 10.00th=[ 288], 20.00th=[ 351], 00:21:02.254 | 30.00th=[ 388], 40.00th=[ 414], 50.00th=[ 439], 60.00th=[ 464], 00:21:02.254 | 70.00th=[ 477], 80.00th=[ 502], 90.00th=[ 523], 95.00th=[ 535], 00:21:02.254 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 584], 00:21:02.254 | 99.99th=[ 584] 00:21:02.254 bw ( KiB/s): min=27136, max=51712, per=6.96%, avg=37324.80, stdev=6446.24, samples=20 00:21:02.254 iops : min= 106, max= 202, avg=145.80, stdev=25.18, samples=20 00:21:02.254 lat (msec) : 100=1.71%, 250=4.60%, 500=73.11%, 750=20.58% 00:21:02.254 cpu : usr=0.10%, sys=0.69%, ctx=286, majf=0, minf=4097 00:21:02.254 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.9% 00:21:02.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.254 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.254 issued rwts: total=1521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.254 job6: (groupid=0, jobs=1): err= 0: pid=78274: Tue Dec 10 11:24:07 2024 00:21:02.254 read: IOPS=207, BW=51.9MiB/s (54.5MB/s)(529MiB/10184msec) 00:21:02.254 slat (usec): min=16, max=297157, avg=4371.52, stdev=19791.34 00:21:02.254 clat (msec): min=19, max=1159, avg=302.94, stdev=336.66 00:21:02.254 lat (msec): min=19, max=1159, avg=307.31, stdev=341.07 00:21:02.254 clat percentiles (msec): 00:21:02.254 | 1.00th=[ 28], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 53], 00:21:02.254 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 67], 00:21:02.254 | 70.00th=[ 651], 80.00th=[ 735], 90.00th=[ 818], 95.00th=[ 860], 00:21:02.254 | 99.00th=[ 885], 99.50th=[ 961], 99.90th=[ 978], 99.95th=[ 978], 00:21:02.254 | 99.99th=[ 1167] 00:21:02.254 bw ( KiB/s): min= 4608, max=288768, per=9.80%, avg=52560.40, stdev=80373.47, samples=20 00:21:02.254 iops : min= 18, max= 1128, avg=205.25, stdev=313.96, samples=20 00:21:02.254 lat (msec) : 20=0.14%, 50=11.72%, 100=50.80%, 250=1.70%, 500=1.13% 00:21:02.254 lat (msec) : 750=16.45%, 1000=18.01%, 2000=0.05% 00:21:02.254 cpu : usr=0.11%, sys=0.85%, ctx=387, majf=0, minf=4097 00:21:02.254 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:21:02.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.254 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.254 job7: (groupid=0, jobs=1): err= 0: pid=78275: Tue Dec 10 11:24:07 2024 00:21:02.254 read: IOPS=150, BW=37.6MiB/s (39.5MB/s)(382MiB/10147msec) 00:21:02.254 slat (usec): min=19, max=122999, avg=6548.51, stdev=16511.63 00:21:02.254 clat (msec): min=122, max=599, avg=417.66, stdev=86.43 00:21:02.254 lat (msec): min=122, max=599, avg=424.21, stdev=87.48 00:21:02.254 clat percentiles (msec): 00:21:02.254 | 1.00th=[ 131], 5.00th=[ 245], 10.00th=[ 284], 20.00th=[ 355], 00:21:02.254 | 30.00th=[ 409], 40.00th=[ 439], 50.00th=[ 451], 60.00th=[ 460], 00:21:02.254 | 70.00th=[ 468], 80.00th=[ 477], 90.00th=[ 493], 95.00th=[ 506], 00:21:02.254 | 99.00th=[ 542], 99.50th=[ 575], 99.90th=[ 600], 99.95th=[ 600], 00:21:02.254 | 99.99th=[ 600] 00:21:02.254 bw ( KiB/s): min=32256, max=57344, per=6.99%, avg=37478.40, stdev=6424.73, samples=20 00:21:02.254 iops : min= 126, max= 224, avg=146.30, stdev=25.14, samples=20 
00:21:02.254 lat (msec) : 250=5.37%, 500=88.35%, 750=6.28% 00:21:02.254 cpu : usr=0.08%, sys=0.64%, ctx=304, majf=0, minf=4097 00:21:02.254 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:21:02.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.254 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.254 issued rwts: total=1528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.254 job8: (groupid=0, jobs=1): err= 0: pid=78276: Tue Dec 10 11:24:07 2024 00:21:02.254 read: IOPS=480, BW=120MiB/s (126MB/s)(1209MiB/10061msec) 00:21:02.254 slat (usec): min=17, max=41255, avg=2062.75, stdev=4767.34 00:21:02.254 clat (msec): min=27, max=185, avg=130.84, stdev=13.75 00:21:02.254 lat (msec): min=27, max=208, avg=132.91, stdev=14.04 00:21:02.254 clat percentiles (msec): 00:21:02.254 | 1.00th=[ 95], 5.00th=[ 113], 10.00th=[ 118], 20.00th=[ 124], 00:21:02.254 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 133], 00:21:02.254 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 146], 95.00th=[ 153], 00:21:02.254 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 182], 99.95th=[ 186], 00:21:02.254 | 99.99th=[ 186] 00:21:02.254 bw ( KiB/s): min=99328, max=128000, per=22.79%, avg=122214.40, stdev=7278.97, samples=20 00:21:02.254 iops : min= 388, max= 500, avg=477.40, stdev=28.43, samples=20 00:21:02.254 lat (msec) : 50=0.39%, 100=1.05%, 250=98.55% 00:21:02.254 cpu : usr=0.33%, sys=1.92%, ctx=1014, majf=0, minf=4097 00:21:02.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:02.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.254 issued rwts: total=4837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.254 job9: (groupid=0, jobs=1): err= 0: pid=78277: Tue Dec 10 11:24:07 2024 00:21:02.254 read: IOPS=89, BW=22.4MiB/s (23.5MB/s)(228MiB/10184msec) 00:21:02.254 slat (usec): min=19, max=254846, avg=11068.45, stdev=31958.96 00:21:02.254 clat (msec): min=84, max=990, avg=701.98, stdev=153.01 00:21:02.254 lat (msec): min=101, max=990, avg=713.05, stdev=154.91 00:21:02.254 clat percentiles (msec): 00:21:02.254 | 1.00th=[ 109], 5.00th=[ 355], 10.00th=[ 558], 20.00th=[ 651], 00:21:02.254 | 30.00th=[ 676], 40.00th=[ 709], 50.00th=[ 735], 60.00th=[ 751], 00:21:02.254 | 70.00th=[ 776], 80.00th=[ 802], 90.00th=[ 835], 95.00th=[ 869], 00:21:02.254 | 99.00th=[ 944], 99.50th=[ 995], 99.90th=[ 995], 99.95th=[ 995], 00:21:02.254 | 99.99th=[ 995] 00:21:02.254 bw ( KiB/s): min=14336, max=30720, per=4.05%, avg=21706.30, stdev=3569.95, samples=20 00:21:02.254 iops : min= 56, max= 120, avg=84.75, stdev=13.91, samples=20 00:21:02.254 lat (msec) : 100=0.11%, 250=3.40%, 500=4.82%, 750=51.32%, 1000=40.35% 00:21:02.254 cpu : usr=0.03%, sys=0.43%, ctx=168, majf=0, minf=4097 00:21:02.254 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:21:02.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.254 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.254 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.254 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.254 job10: (groupid=0, jobs=1): err= 0: pid=78278: Tue Dec 10 11:24:07 2024 00:21:02.254 read: 
IOPS=479, BW=120MiB/s (126MB/s)(1205MiB/10053msec) 00:21:02.254 slat (usec): min=16, max=50944, avg=2053.47, stdev=4875.69 00:21:02.254 clat (msec): min=18, max=214, avg=131.24, stdev=13.91 00:21:02.254 lat (msec): min=19, max=214, avg=133.29, stdev=14.13 00:21:02.254 clat percentiles (msec): 00:21:02.254 | 1.00th=[ 87], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 124], 00:21:02.254 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 134], 00:21:02.254 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 146], 95.00th=[ 153], 00:21:02.254 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 188], 00:21:02.254 | 99.99th=[ 215] 00:21:02.254 bw ( KiB/s): min=93508, max=131334, per=22.72%, avg=121831.05, stdev=7833.55, samples=20 00:21:02.254 iops : min= 365, max= 513, avg=475.60, stdev=30.60, samples=20 00:21:02.254 lat (msec) : 20=0.06%, 50=0.02%, 100=1.70%, 250=98.22% 00:21:02.254 cpu : usr=0.32%, sys=1.98%, ctx=978, majf=0, minf=4097 00:21:02.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:02.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:02.255 issued rwts: total=4820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:02.255 00:21:02.255 Run status group 0 (all jobs): 00:21:02.255 READ: bw=524MiB/s (549MB/s), 22.1MiB/s-120MiB/s (23.2MB/s-126MB/s), io=5337MiB (5596MB), run=10053-10192msec 00:21:02.255 00:21:02.255 Disk stats (read/write): 00:21:02.255 nvme0n1: ios=1676/0, merge=0/0, ticks=1194227/0, in_queue=1194227, util=97.55% 00:21:02.255 nvme10n1: ios=2533/0, merge=0/0, ticks=1212890/0, in_queue=1212890, util=97.74% 00:21:02.255 nvme1n1: ios=1748/0, merge=0/0, ticks=1192134/0, in_queue=1192134, util=98.17% 00:21:02.255 nvme2n1: ios=2936/0, merge=0/0, ticks=1214659/0, in_queue=1214659, util=98.08% 00:21:02.255 nvme3n1: ios=1698/0, merge=0/0, ticks=1190775/0, in_queue=1190775, util=98.15% 00:21:02.255 nvme4n1: ios=2922/0, merge=0/0, ticks=1220525/0, in_queue=1220525, util=98.47% 00:21:02.255 nvme5n1: ios=4107/0, merge=0/0, ticks=1194001/0, in_queue=1194001, util=98.46% 00:21:02.255 nvme6n1: ios=2929/0, merge=0/0, ticks=1216037/0, in_queue=1216037, util=98.57% 00:21:02.255 nvme7n1: ios=9550/0, merge=0/0, ticks=1235552/0, in_queue=1235552, util=98.94% 00:21:02.255 nvme8n1: ios=1699/0, merge=0/0, ticks=1205807/0, in_queue=1205807, util=98.80% 00:21:02.255 nvme9n1: ios=9512/0, merge=0/0, ticks=1235710/0, in_queue=1235710, util=99.09% 00:21:02.255 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:02.255 [global] 00:21:02.255 thread=1 00:21:02.255 invalidate=1 00:21:02.255 rw=randwrite 00:21:02.255 time_based=1 00:21:02.255 runtime=10 00:21:02.255 ioengine=libaio 00:21:02.255 direct=1 00:21:02.255 bs=262144 00:21:02.255 iodepth=64 00:21:02.255 norandommap=1 00:21:02.255 numjobs=1 00:21:02.255 00:21:02.255 [job0] 00:21:02.255 filename=/dev/nvme0n1 00:21:02.255 [job1] 00:21:02.255 filename=/dev/nvme10n1 00:21:02.255 [job2] 00:21:02.255 filename=/dev/nvme1n1 00:21:02.255 [job3] 00:21:02.255 filename=/dev/nvme2n1 00:21:02.255 [job4] 00:21:02.255 filename=/dev/nvme3n1 00:21:02.255 [job5] 00:21:02.255 filename=/dev/nvme4n1 00:21:02.255 [job6] 00:21:02.255 filename=/dev/nvme5n1 00:21:02.255 [job7] 00:21:02.255 filename=/dev/nvme6n1 
00:21:02.255 [job8] 00:21:02.255 filename=/dev/nvme7n1 00:21:02.255 [job9] 00:21:02.255 filename=/dev/nvme8n1 00:21:02.255 [job10] 00:21:02.255 filename=/dev/nvme9n1 00:21:02.255 Could not set queue depth (nvme0n1) 00:21:02.255 Could not set queue depth (nvme10n1) 00:21:02.255 Could not set queue depth (nvme1n1) 00:21:02.255 Could not set queue depth (nvme2n1) 00:21:02.255 Could not set queue depth (nvme3n1) 00:21:02.255 Could not set queue depth (nvme4n1) 00:21:02.255 Could not set queue depth (nvme5n1) 00:21:02.255 Could not set queue depth (nvme6n1) 00:21:02.255 Could not set queue depth (nvme7n1) 00:21:02.255 Could not set queue depth (nvme8n1) 00:21:02.255 Could not set queue depth (nvme9n1) 00:21:02.255 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:02.255 fio-3.35 00:21:02.255 Starting 11 threads 00:21:12.228 00:21:12.228 job0: (groupid=0, jobs=1): err= 0: pid=78478: Tue Dec 10 11:24:18 2024 00:21:12.228 write: IOPS=499, BW=125MiB/s (131MB/s)(1260MiB/10098msec); 0 zone resets 00:21:12.228 slat (usec): min=15, max=100722, avg=1978.95, stdev=3647.82 00:21:12.228 clat (msec): min=94, max=331, avg=126.23, stdev=16.61 00:21:12.228 lat (msec): min=101, max=331, avg=128.21, stdev=16.41 00:21:12.228 clat percentiles (msec): 00:21:12.228 | 1.00th=[ 116], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 121], 00:21:12.228 | 30.00th=[ 125], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 126], 00:21:12.228 | 70.00th=[ 126], 80.00th=[ 128], 90.00th=[ 129], 95.00th=[ 130], 00:21:12.228 | 99.00th=[ 213], 99.50th=[ 262], 99.90th=[ 321], 99.95th=[ 321], 00:21:12.228 | 99.99th=[ 330] 00:21:12.228 bw ( KiB/s): min=72192, max=133120, per=13.22%, avg=127385.60, stdev=13083.68, samples=20 00:21:12.228 iops : min= 282, max= 520, avg=497.60, stdev=51.11, samples=20 00:21:12.228 lat (msec) : 100=0.02%, 250=99.46%, 500=0.52% 00:21:12.228 cpu : usr=0.94%, sys=1.32%, ctx=5840, majf=0, minf=1 00:21:12.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:12.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.228 issued rwts: total=0,5039,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:12.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.228 job1: (groupid=0, jobs=1): err= 0: pid=78479: Tue Dec 10 11:24:18 2024 00:21:12.228 write: IOPS=257, BW=64.4MiB/s (67.6MB/s)(656MiB/10179msec); 0 zone resets 00:21:12.228 slat (usec): min=17, max=24633, avg=3757.39, stdev=6667.14 00:21:12.228 clat (msec): min=19, max=420, avg=244.41, stdev=30.81 00:21:12.228 lat (msec): min=19, max=420, avg=248.17, stdev=30.66 00:21:12.228 clat percentiles (msec): 00:21:12.229 | 1.00th=[ 93], 5.00th=[ 197], 10.00th=[ 232], 20.00th=[ 239], 00:21:12.229 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:21:12.229 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 262], 00:21:12.229 | 99.00th=[ 313], 99.50th=[ 376], 99.90th=[ 409], 99.95th=[ 422], 00:21:12.229 | 99.99th=[ 422] 00:21:12.229 bw ( KiB/s): min=63361, max=81920, per=6.80%, avg=65529.65, stdev=3990.29, samples=20 00:21:12.229 iops : min= 247, max= 320, avg=255.95, stdev=15.60, samples=20 00:21:12.229 lat (msec) : 20=0.04%, 50=0.30%, 100=0.72%, 250=44.40%, 500=54.54% 00:21:12.229 cpu : usr=0.56%, sys=0.69%, ctx=2849, majf=0, minf=1 00:21:12.229 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.229 issued rwts: total=0,2624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.229 job2: (groupid=0, jobs=1): err= 0: pid=78491: Tue Dec 10 11:24:18 2024 00:21:12.229 write: IOPS=381, BW=95.3MiB/s (99.9MB/s)(966MiB/10140msec); 0 zone resets 00:21:12.229 slat (usec): min=16, max=36511, avg=2580.81, stdev=4481.66 00:21:12.229 clat (msec): min=15, max=304, avg=165.26, stdev=21.04 00:21:12.229 lat (msec): min=16, max=304, avg=167.84, stdev=20.88 00:21:12.229 clat percentiles (msec): 00:21:12.229 | 1.00th=[ 86], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:21:12.229 | 30.00th=[ 163], 40.00th=[ 163], 50.00th=[ 163], 60.00th=[ 165], 00:21:12.229 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 203], 00:21:12.229 | 99.00th=[ 232], 99.50th=[ 253], 99.90th=[ 296], 99.95th=[ 305], 00:21:12.229 | 99.99th=[ 305] 00:21:12.229 bw ( KiB/s): min=83968, max=102400, per=10.10%, avg=97321.10, stdev=5996.84, samples=20 00:21:12.229 iops : min= 328, max= 400, avg=380.15, stdev=23.42, samples=20 00:21:12.229 lat (msec) : 20=0.10%, 50=0.41%, 100=0.65%, 250=98.27%, 500=0.57% 00:21:12.229 cpu : usr=0.54%, sys=1.27%, ctx=5639, majf=0, minf=1 00:21:12.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.229 issued rwts: total=0,3865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.229 job3: (groupid=0, jobs=1): err= 0: pid=78492: Tue Dec 10 11:24:18 2024 00:21:12.229 write: IOPS=260, BW=65.1MiB/s (68.2MB/s)(663MiB/10187msec); 0 zone resets 00:21:12.229 slat (usec): min=16, max=35698, avg=3770.33, stdev=6648.56 00:21:12.229 clat (msec): min=24, max=425, avg=242.05, stdev=36.22 00:21:12.229 lat (msec): min=24, max=425, avg=245.82, stdev=36.21 00:21:12.229 clat percentiles (msec): 00:21:12.229 | 1.00th=[ 87], 5.00th=[ 165], 10.00th=[ 230], 20.00th=[ 239], 
00:21:12.229 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:21:12.229 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 262], 00:21:12.229 | 99.00th=[ 321], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 426], 00:21:12.229 | 99.99th=[ 426] 00:21:12.229 bw ( KiB/s): min=63488, max=95744, per=6.87%, avg=66220.80, stdev=7017.03, samples=20 00:21:12.229 iops : min= 248, max= 374, avg=258.65, stdev=27.42, samples=20 00:21:12.229 lat (msec) : 50=0.26%, 100=0.91%, 250=44.32%, 500=54.51% 00:21:12.229 cpu : usr=0.58%, sys=0.71%, ctx=875, majf=0, minf=1 00:21:12.229 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.229 issued rwts: total=0,2651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.229 job4: (groupid=0, jobs=1): err= 0: pid=78493: Tue Dec 10 11:24:18 2024 00:21:12.229 write: IOPS=258, BW=64.6MiB/s (67.7MB/s)(657MiB/10179msec); 0 zone resets 00:21:12.229 slat (usec): min=16, max=54611, avg=3797.61, stdev=6717.67 00:21:12.229 clat (msec): min=21, max=427, avg=243.89, stdev=33.97 00:21:12.229 lat (msec): min=21, max=427, avg=247.69, stdev=33.88 00:21:12.229 clat percentiles (msec): 00:21:12.229 | 1.00th=[ 82], 5.00th=[ 184], 10.00th=[ 232], 20.00th=[ 239], 00:21:12.229 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:21:12.229 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 262], 00:21:12.229 | 99.00th=[ 321], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 426], 00:21:12.229 | 99.99th=[ 426] 00:21:12.229 bw ( KiB/s): min=63361, max=86016, per=6.82%, avg=65683.25, stdev=4942.29, samples=20 00:21:12.229 iops : min= 247, max= 336, avg=256.55, stdev=19.32, samples=20 00:21:12.229 lat (msec) : 50=0.46%, 100=0.91%, 250=42.60%, 500=56.03% 00:21:12.229 cpu : usr=0.43%, sys=0.79%, ctx=2493, majf=0, minf=1 00:21:12.229 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.229 issued rwts: total=0,2629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.229 job5: (groupid=0, jobs=1): err= 0: pid=78494: Tue Dec 10 11:24:18 2024 00:21:12.229 write: IOPS=287, BW=71.8MiB/s (75.3MB/s)(730MiB/10168msec); 0 zone resets 00:21:12.229 slat (usec): min=15, max=20456, avg=3396.22, stdev=5974.38 00:21:12.229 clat (msec): min=19, max=386, avg=219.29, stdev=29.67 00:21:12.229 lat (msec): min=19, max=386, avg=222.68, stdev=29.57 00:21:12.229 clat percentiles (msec): 00:21:12.229 | 1.00th=[ 61], 5.00th=[ 203], 10.00th=[ 209], 20.00th=[ 213], 00:21:12.229 | 30.00th=[ 220], 40.00th=[ 222], 50.00th=[ 224], 60.00th=[ 226], 00:21:12.229 | 70.00th=[ 226], 80.00th=[ 228], 90.00th=[ 232], 95.00th=[ 243], 00:21:12.229 | 99.00th=[ 288], 99.50th=[ 330], 99.90th=[ 372], 99.95th=[ 388], 00:21:12.229 | 99.99th=[ 388] 00:21:12.229 bw ( KiB/s): min=65536, max=86188, per=7.59%, avg=73140.40, stdev=3606.98, samples=20 00:21:12.229 iops : min= 256, max= 336, avg=285.65, stdev=13.96, samples=20 00:21:12.229 lat (msec) : 20=0.14%, 50=0.68%, 100=0.99%, 250=95.00%, 500=3.18% 00:21:12.229 cpu : usr=0.59%, sys=0.75%, ctx=2416, majf=0, minf=1 00:21:12.229 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:21:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.229 issued rwts: total=0,2921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.229 job6: (groupid=0, jobs=1): err= 0: pid=78495: Tue Dec 10 11:24:18 2024 00:21:12.229 write: IOPS=289, BW=72.4MiB/s (75.9MB/s)(736MiB/10162msec); 0 zone resets 00:21:12.229 slat (usec): min=19, max=23299, avg=3296.87, stdev=5881.13 00:21:12.229 clat (msec): min=7, max=383, avg=217.54, stdev=30.54 00:21:12.229 lat (msec): min=7, max=383, avg=220.83, stdev=30.58 00:21:12.229 clat percentiles (msec): 00:21:12.229 | 1.00th=[ 52], 5.00th=[ 182], 10.00th=[ 207], 20.00th=[ 213], 00:21:12.229 | 30.00th=[ 218], 40.00th=[ 222], 50.00th=[ 224], 60.00th=[ 226], 00:21:12.229 | 70.00th=[ 226], 80.00th=[ 228], 90.00th=[ 230], 95.00th=[ 234], 00:21:12.229 | 99.00th=[ 284], 99.50th=[ 326], 99.90th=[ 372], 99.95th=[ 384], 00:21:12.229 | 99.99th=[ 384] 00:21:12.229 bw ( KiB/s): min=71168, max=85504, per=7.65%, avg=73746.40, stdev=3419.11, samples=20 00:21:12.229 iops : min= 278, max= 334, avg=288.05, stdev=13.37, samples=20 00:21:12.229 lat (msec) : 10=0.07%, 20=0.14%, 50=0.68%, 100=1.09%, 250=96.26% 00:21:12.229 lat (msec) : 500=1.77% 00:21:12.229 cpu : usr=0.56%, sys=0.88%, ctx=3723, majf=0, minf=1 00:21:12.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:21:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.229 issued rwts: total=0,2944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.229 job7: (groupid=0, jobs=1): err= 0: pid=78496: Tue Dec 10 11:24:18 2024 00:21:12.229 write: IOPS=377, BW=94.3MiB/s (98.9MB/s)(956MiB/10140msec); 0 zone resets 00:21:12.229 slat (usec): min=15, max=122865, avg=2609.45, stdev=4864.25 00:21:12.229 clat (msec): min=126, max=309, avg=166.98, stdev=18.60 00:21:12.229 lat (msec): min=126, max=309, avg=169.59, stdev=18.22 00:21:12.229 clat percentiles (msec): 00:21:12.229 | 1.00th=[ 153], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:21:12.229 | 30.00th=[ 163], 40.00th=[ 163], 50.00th=[ 163], 60.00th=[ 165], 00:21:12.229 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 205], 00:21:12.229 | 99.00th=[ 249], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 309], 00:21:12.229 | 99.99th=[ 309] 00:21:12.229 bw ( KiB/s): min=63488, max=102400, per=9.99%, avg=96297.15, stdev=9275.50, samples=20 00:21:12.229 iops : min= 248, max= 400, avg=376.15, stdev=36.23, samples=20 00:21:12.229 lat (msec) : 250=99.03%, 500=0.97% 00:21:12.229 cpu : usr=0.60%, sys=1.16%, ctx=5454, majf=0, minf=1 00:21:12.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.229 issued rwts: total=0,3825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.229 job8: (groupid=0, jobs=1): err= 0: pid=78497: Tue Dec 10 11:24:18 2024 00:21:12.229 write: IOPS=499, BW=125MiB/s (131MB/s)(1262MiB/10106msec); 0 zone resets 00:21:12.229 slat (usec): min=16, 
max=83519, avg=1976.67, stdev=3563.74 00:21:12.229 clat (msec): min=87, max=302, avg=126.15, stdev=15.36 00:21:12.229 lat (msec): min=88, max=302, avg=128.13, stdev=15.14 00:21:12.229 clat percentiles (msec): 00:21:12.229 | 1.00th=[ 116], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 121], 00:21:12.229 | 30.00th=[ 125], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 126], 00:21:12.229 | 70.00th=[ 127], 80.00th=[ 127], 90.00th=[ 129], 95.00th=[ 130], 00:21:12.229 | 99.00th=[ 218], 99.50th=[ 232], 99.90th=[ 288], 99.95th=[ 305], 00:21:12.229 | 99.99th=[ 305] 00:21:12.229 bw ( KiB/s): min=75927, max=131584, per=13.24%, avg=127559.25, stdev=12228.47, samples=20 00:21:12.229 iops : min= 296, max= 514, avg=498.20, stdev=47.89, samples=20 00:21:12.229 lat (msec) : 100=0.16%, 250=99.54%, 500=0.30% 00:21:12.229 cpu : usr=0.87%, sys=1.41%, ctx=5828, majf=0, minf=1 00:21:12.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:12.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.230 issued rwts: total=0,5046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.230 job9: (groupid=0, jobs=1): err= 0: pid=78498: Tue Dec 10 11:24:18 2024 00:21:12.230 write: IOPS=378, BW=94.7MiB/s (99.3MB/s)(960MiB/10140msec); 0 zone resets 00:21:12.230 slat (usec): min=13, max=79680, avg=2598.86, stdev=4633.69 00:21:12.230 clat (msec): min=81, max=302, avg=166.33, stdev=17.65 00:21:12.230 lat (msec): min=81, max=302, avg=168.93, stdev=17.27 00:21:12.230 clat percentiles (msec): 00:21:12.230 | 1.00th=[ 150], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:21:12.230 | 30.00th=[ 163], 40.00th=[ 163], 50.00th=[ 163], 60.00th=[ 165], 00:21:12.230 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 203], 00:21:12.230 | 99.00th=[ 236], 99.50th=[ 262], 99.90th=[ 292], 99.95th=[ 305], 00:21:12.230 | 99.99th=[ 305] 00:21:12.230 bw ( KiB/s): min=71680, max=102400, per=10.03%, avg=96681.15, stdev=7833.05, samples=20 00:21:12.230 iops : min= 280, max= 400, avg=377.65, stdev=30.59, samples=20 00:21:12.230 lat (msec) : 100=0.21%, 250=99.14%, 500=0.65% 00:21:12.230 cpu : usr=0.51%, sys=1.17%, ctx=4849, majf=0, minf=1 00:21:12.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:12.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.230 issued rwts: total=0,3840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.230 job10: (groupid=0, jobs=1): err= 0: pid=78499: Tue Dec 10 11:24:18 2024 00:21:12.230 write: IOPS=290, BW=72.7MiB/s (76.3MB/s)(740MiB/10167msec); 0 zone resets 00:21:12.230 slat (usec): min=16, max=19826, avg=3274.89, stdev=5899.41 00:21:12.230 clat (msec): min=4, max=388, avg=216.58, stdev=36.31 00:21:12.230 lat (msec): min=4, max=388, avg=219.85, stdev=36.52 00:21:12.230 clat percentiles (msec): 00:21:12.230 | 1.00th=[ 27], 5.00th=[ 167], 10.00th=[ 207], 20.00th=[ 213], 00:21:12.230 | 30.00th=[ 218], 40.00th=[ 222], 50.00th=[ 224], 60.00th=[ 226], 00:21:12.230 | 70.00th=[ 226], 80.00th=[ 228], 90.00th=[ 230], 95.00th=[ 236], 00:21:12.230 | 99.00th=[ 288], 99.50th=[ 330], 99.90th=[ 372], 99.95th=[ 388], 00:21:12.230 | 99.99th=[ 388] 00:21:12.230 bw ( KiB/s): min=67584, max=105472, per=7.69%, avg=74093.75, 
stdev=7541.20, samples=20 00:21:12.230 iops : min= 264, max= 412, avg=289.40, stdev=29.46, samples=20 00:21:12.230 lat (msec) : 10=0.24%, 20=0.41%, 50=1.18%, 100=1.12%, 250=94.15% 00:21:12.230 lat (msec) : 500=2.91% 00:21:12.230 cpu : usr=0.55%, sys=0.89%, ctx=3879, majf=0, minf=2 00:21:12.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:21:12.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.230 issued rwts: total=0,2958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.230 00:21:12.230 Run status group 0 (all jobs): 00:21:12.230 WRITE: bw=941MiB/s (987MB/s), 64.4MiB/s-125MiB/s (67.6MB/s-131MB/s), io=9586MiB (10.1GB), run=10098-10187msec 00:21:12.230 00:21:12.230 Disk stats (read/write): 00:21:12.230 nvme0n1: ios=49/9896, merge=0/0, ticks=58/1209725, in_queue=1209783, util=97.58% 00:21:12.230 nvme10n1: ios=49/5101, merge=0/0, ticks=42/1204410, in_queue=1204452, util=97.81% 00:21:12.230 nvme1n1: ios=30/7575, merge=0/0, ticks=55/1208092, in_queue=1208147, util=97.97% 00:21:12.230 nvme2n1: ios=20/5160, merge=0/0, ticks=36/1204775, in_queue=1204811, util=98.09% 00:21:12.230 nvme3n1: ios=0/5116, merge=0/0, ticks=0/1203577, in_queue=1203577, util=98.02% 00:21:12.230 nvme4n1: ios=0/5696, merge=0/0, ticks=0/1206316, in_queue=1206316, util=98.32% 00:21:12.230 nvme5n1: ios=0/5740, merge=0/0, ticks=0/1206667, in_queue=1206667, util=98.37% 00:21:12.230 nvme6n1: ios=0/7487, merge=0/0, ticks=0/1207536, in_queue=1207536, util=98.37% 00:21:12.230 nvme7n1: ios=0/9923, merge=0/0, ticks=0/1210995, in_queue=1210995, util=98.68% 00:21:12.230 nvme8n1: ios=0/7519, merge=0/0, ticks=0/1207780, in_queue=1207780, util=98.76% 00:21:12.230 nvme9n1: ios=0/5770, merge=0/0, ticks=0/1207250, in_queue=1207250, util=98.95% 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:12.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:12.230 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:12.230 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:12.230 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:12.230 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:21:12.230 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:12.231 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:12.231 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:12.231 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.231 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.231 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.231 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.231 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:12.489 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:12.489 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:12.490 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:12.490 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.490 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.749 rmmod nvme_tcp 00:21:12.749 rmmod nvme_fabrics 00:21:12.749 rmmod nvme_keyring 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 77812 ']' 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 77812 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 77812 ']' 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 77812 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77812 00:21:12.749 killing process with pid 77812 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77812' 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 77812 00:21:12.749 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 77812 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:21:16.035 00:21:16.035 real 0m52.563s 00:21:16.035 user 3m2.226s 00:21:16.035 sys 0m23.942s 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.035 
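The teardown traced above is the loop at lines 37-40 of target/multiconnection.sh: each subsystem is disconnected on the initiator, the script polls lsblk until the matching SPDKn serial disappears, and the subsystem is then deleted on the target over RPC. A condensed sketch of that loop, reconstructed from the xtrace markers (NVMF_SUBSYS, waitforserial_disconnect and rpc_cmd are the value and helpers shown in the trace):
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"              # initiator side
      waitforserial_disconnect "SPDK$i"                             # wait until lsblk no longer lists serial SPDK$i
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # target side
  done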
************************************ 00:21:16.035 END TEST nvmf_multiconnection 00:21:16.035 ************************************ 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.035 ************************************ 00:21:16.035 START TEST nvmf_initiator_timeout 00:21:16.035 ************************************ 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:16.035 * Looking for test storage... 00:21:16.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:21:16.035 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:16.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.036 --rc genhtml_branch_coverage=1 00:21:16.036 --rc genhtml_function_coverage=1 00:21:16.036 --rc genhtml_legend=1 00:21:16.036 --rc geninfo_all_blocks=1 00:21:16.036 --rc geninfo_unexecuted_blocks=1 00:21:16.036 00:21:16.036 ' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:16.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.036 --rc genhtml_branch_coverage=1 00:21:16.036 --rc genhtml_function_coverage=1 00:21:16.036 --rc genhtml_legend=1 00:21:16.036 --rc geninfo_all_blocks=1 00:21:16.036 --rc geninfo_unexecuted_blocks=1 00:21:16.036 00:21:16.036 ' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:16.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.036 --rc genhtml_branch_coverage=1 00:21:16.036 --rc genhtml_function_coverage=1 00:21:16.036 --rc genhtml_legend=1 00:21:16.036 --rc geninfo_all_blocks=1 00:21:16.036 --rc geninfo_unexecuted_blocks=1 00:21:16.036 00:21:16.036 ' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:16.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.036 --rc genhtml_branch_coverage=1 00:21:16.036 --rc genhtml_function_coverage=1 00:21:16.036 --rc genhtml_legend=1 00:21:16.036 --rc geninfo_all_blocks=1 00:21:16.036 --rc geninfo_unexecuted_blocks=1 00:21:16.036 00:21:16.036 ' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.036 11:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
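The NVMF_* assignments around this point name the virtual topology that nvmf_veth_init builds next (NET_TYPE=virt, so no physical NICs are used): the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4, the initiator stays in the root namespace on 10.0.0.1/10.0.0.2, and the two sides meet on the nvmf_br bridge. A condensed sketch of the commands that appear further down in the trace (the second veth pair, nvmf_init_if2/nvmf_tgt_if2, is created the same way; link-up steps and the SPDK_NVMF iptables comments are omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the listener port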
00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:16.036 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:16.037 Cannot find device "nvmf_init_br" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:16.037 Cannot find device "nvmf_init_br2" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:16.037 Cannot find device "nvmf_tgt_br" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:16.037 Cannot find device "nvmf_tgt_br2" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:16.037 Cannot find device "nvmf_init_br" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:16.037 Cannot find device "nvmf_init_br2" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:16.037 Cannot find device "nvmf_tgt_br" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:16.037 Cannot find device "nvmf_tgt_br2" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:21:16.037 11:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:16.037 Cannot find device "nvmf_br" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:16.037 Cannot find device "nvmf_init_if" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:16.037 Cannot find device "nvmf_init_if2" 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:16.037 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:16.296 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:16.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:16.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:21:16.296 00:21:16.296 --- 10.0.0.3 ping statistics --- 00:21:16.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.296 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:16.296 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:16.296 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:21:16.296 00:21:16.296 --- 10.0.0.4 ping statistics --- 00:21:16.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.296 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:16.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:16.296 00:21:16.296 --- 10.0.0.1 ping statistics --- 00:21:16.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.296 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:16.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:21:16.296 00:21:16.296 --- 10.0.0.2 ping statistics --- 00:21:16.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.296 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.296 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=78939 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 78939 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 78939 ']' 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.297 11:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.297 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:16.556 [2024-12-10 11:24:23.208061] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:21:16.556 [2024-12-10 11:24:23.208427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.815 [2024-12-10 11:24:23.398858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.815 [2024-12-10 11:24:23.506159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.815 [2024-12-10 11:24:23.506222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.815 [2024-12-10 11:24:23.506243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.815 [2024-12-10 11:24:23.506256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.815 [2024-12-10 11:24:23.506272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
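With the target application listening on /var/tmp/spdk.sock inside the namespace, initiator_timeout.sh builds its I/O path over RPC: a 64 MiB, 512-byte-block malloc bdev wrapped in a delay bdev, exported through subsystem cnode1 on 10.0.0.3:4420, and then attached with the kernel initiator. The delay latencies start at 30 microseconds so the connect and the fio job run normally; later in the trace they are raised to 31000000 microseconds (31 s) to push I/O latency past the initiator's timeout, which is what the test exercises. A condensed sketch of the sequence shown below (rpc_cmd is the suite's wrapper that drives the target's JSON-RPC socket):
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB backing bdev, 512 B blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30  # avg/p99 read/write latencies, in microseconds
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420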
00:21:16.815 [2024-12-10 11:24:23.508210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.815 [2024-12-10 11:24:23.508295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.815 [2024-12-10 11:24:23.508372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.815 [2024-12-10 11:24:23.508390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.074 [2024-12-10 11:24:23.692806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:17.640 Malloc0 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:17.640 Delay0 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:17.640 [2024-12-10 11:24:24.295775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.640 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:17.640 11:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:17.641 [2024-12-10 11:24:24.328282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:21:17.641 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:21:20.196 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:20.196 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:20.196 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:20.196 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:21:20.196 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:21:20.196 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:21:20.196 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=79003 00:21:20.196 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:20.196 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:21:20.196 [global] 00:21:20.196 thread=1 00:21:20.196 invalidate=1 00:21:20.196 rw=write 00:21:20.196 time_based=1 00:21:20.196 runtime=60 00:21:20.196 ioengine=libaio 00:21:20.196 direct=1 00:21:20.196 bs=4096 00:21:20.196 iodepth=1 00:21:20.196 norandommap=0 00:21:20.196 numjobs=1 00:21:20.196 00:21:20.196 verify_dump=1 00:21:20.196 verify_backlog=512 00:21:20.196 verify_state_save=0 00:21:20.196 do_verify=1 00:21:20.196 verify=crc32c-intel 00:21:20.196 [job0] 00:21:20.196 filename=/dev/nvme0n1 00:21:20.196 Could not set queue depth (nvme0n1) 00:21:20.196 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:20.196 fio-3.35 00:21:20.196 Starting 1 thread 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:22.725 true 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:22.725 true 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:22.725 true 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:22.725 true 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.725 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:26.011 true 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:26.011 true 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:26.011 true 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:26.011 true 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:26.011 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 79003 00:22:22.229 00:22:22.229 job0: (groupid=0, jobs=1): err= 0: pid=79024: Tue Dec 10 11:25:26 2024 00:22:22.229 read: IOPS=665, BW=2662KiB/s (2726kB/s)(156MiB/60001msec) 00:22:22.229 slat (nsec): min=11944, max=84326, avg=16722.63, stdev=5856.43 00:22:22.229 clat (usec): min=209, max=2184, avg=248.31, stdev=25.70 00:22:22.229 lat (usec): min=222, max=2199, avg=265.03, stdev=28.20 00:22:22.229 clat percentiles (usec): 00:22:22.229 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 231], 00:22:22.229 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:22:22.229 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 285], 00:22:22.229 | 99.00th=[ 347], 99.50th=[ 375], 99.90th=[ 416], 99.95th=[ 433], 00:22:22.229 | 99.99th=[ 506] 00:22:22.229 write: IOPS=673, BW=2693KiB/s (2758kB/s)(158MiB/60001msec); 0 zone resets 00:22:22.229 slat (usec): min=14, max=11854, avg=24.62, stdev=70.90 00:22:22.229 clat (usec): min=6, max=40530k, avg=1194.40, stdev=201650.26 00:22:22.229 lat (usec): min=169, max=40530k, avg=1219.01, stdev=201650.25 00:22:22.229 clat percentiles (usec): 00:22:22.229 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 174], 00:22:22.229 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:22:22.229 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 227], 00:22:22.229 | 
99.00th=[ 260], 99.50th=[ 285], 99.90th=[ 359], 99.95th=[ 469], 00:22:22.229 | 99.99th=[ 1074] 00:22:22.229 bw ( KiB/s): min= 3992, max= 9688, per=100.00%, avg=8086.54, stdev=1111.34, samples=39 00:22:22.229 iops : min= 998, max= 2422, avg=2021.62, stdev=277.83, samples=39 00:22:22.229 lat (usec) : 10=0.01%, 250=80.35%, 500=19.62%, 750=0.01%, 1000=0.01% 00:22:22.229 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:22:22.229 cpu : usr=0.61%, sys=2.19%, ctx=80351, majf=0, minf=5 00:22:22.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.229 issued rwts: total=39936,40398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:22.229 00:22:22.229 Run status group 0 (all jobs): 00:22:22.229 READ: bw=2662KiB/s (2726kB/s), 2662KiB/s-2662KiB/s (2726kB/s-2726kB/s), io=156MiB (164MB), run=60001-60001msec 00:22:22.229 WRITE: bw=2693KiB/s (2758kB/s), 2693KiB/s-2693KiB/s (2758kB/s-2758kB/s), io=158MiB (165MB), run=60001-60001msec 00:22:22.229 00:22:22.229 Disk stats (read/write): 00:22:22.229 nvme0n1: ios=40192/40002, merge=0/0, ticks=10210/8005, in_queue=18215, util=99.70% 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:22.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:22.229 nvmf hotplug test: fio successful as expected 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:22.229 rmmod nvme_tcp 00:22:22.229 rmmod nvme_fabrics 00:22:22.229 rmmod nvme_keyring 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 78939 ']' 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 78939 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 78939 ']' 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 78939 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78939 00:22:22.229 killing process with pid 78939 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78939' 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 78939 00:22:22.229 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 78939 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:22:22.229 11:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:22.229 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:22:22.230 ************************************ 00:22:22.230 END TEST nvmf_initiator_timeout 00:22:22.230 ************************************ 00:22:22.230 00:22:22.230 real 1m5.937s 00:22:22.230 user 3m56.052s 00:22:22.230 sys 0m21.738s 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:22.230 ************************************ 00:22:22.230 START TEST nvmf_nsid 00:22:22.230 ************************************ 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:22.230 * Looking for test storage... 00:22:22.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:22.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.230 --rc genhtml_branch_coverage=1 00:22:22.230 --rc genhtml_function_coverage=1 00:22:22.230 --rc genhtml_legend=1 00:22:22.230 --rc geninfo_all_blocks=1 00:22:22.230 --rc geninfo_unexecuted_blocks=1 00:22:22.230 00:22:22.230 ' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:22.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.230 --rc genhtml_branch_coverage=1 00:22:22.230 --rc genhtml_function_coverage=1 00:22:22.230 --rc genhtml_legend=1 00:22:22.230 --rc geninfo_all_blocks=1 00:22:22.230 --rc geninfo_unexecuted_blocks=1 00:22:22.230 00:22:22.230 ' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:22.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.230 --rc genhtml_branch_coverage=1 00:22:22.230 --rc genhtml_function_coverage=1 00:22:22.230 --rc genhtml_legend=1 00:22:22.230 --rc geninfo_all_blocks=1 00:22:22.230 --rc geninfo_unexecuted_blocks=1 00:22:22.230 00:22:22.230 ' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:22.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.230 --rc genhtml_branch_coverage=1 00:22:22.230 --rc genhtml_function_coverage=1 00:22:22.230 --rc genhtml_legend=1 00:22:22.230 --rc geninfo_all_blocks=1 00:22:22.230 --rc geninfo_unexecuted_blocks=1 00:22:22.230 00:22:22.230 ' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.230 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:22.231 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:22.231 Cannot find device "nvmf_init_br" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:22.231 Cannot find device "nvmf_init_br2" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:22.231 Cannot find device "nvmf_tgt_br" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.231 Cannot find device "nvmf_tgt_br2" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:22.231 Cannot find device "nvmf_init_br" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:22.231 Cannot find device "nvmf_init_br2" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:22.231 Cannot find device "nvmf_tgt_br" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:22.231 Cannot find device "nvmf_tgt_br2" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:22.231 Cannot find device "nvmf_br" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:22.231 Cannot find device "nvmf_init_if" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:22.231 Cannot find device "nvmf_init_if2" 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:22:22.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:22.231 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:22.231 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:22.231 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:22.232 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:22:22.232 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:22.232 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:22.232 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:22.232 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:22.491 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:22.491 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:22:22.491 00:22:22.491 --- 10.0.0.3 ping statistics --- 00:22:22.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.491 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:22.491 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:22.491 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:22:22.491 00:22:22.491 --- 10.0.0.4 ping statistics --- 00:22:22.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.491 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:22.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:22.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:22.491 00:22:22.491 --- 10.0.0.1 ping statistics --- 00:22:22.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.491 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:22.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:22.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:22:22.491 00:22:22.491 --- 10.0.0.2 ping statistics --- 00:22:22.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.491 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:22.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=79891 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 79891 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 79891 ']' 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.491 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:22.491 [2024-12-10 11:25:29.231833] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:22:22.491 [2024-12-10 11:25:29.232178] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.751 [2024-12-10 11:25:29.435666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.751 [2024-12-10 11:25:29.560880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.751 [2024-12-10 11:25:29.561204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.751 [2024-12-10 11:25:29.561433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.751 [2024-12-10 11:25:29.561696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.751 [2024-12-10 11:25:29.561754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.751 [2024-12-10 11:25:29.563267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.010 [2024-12-10 11:25:29.784652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=79925 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=4b8b8466-99db-4dd9-833a-0d75af48a65c 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e8c75210-a368-4a02-9c6b-c01a81808b12 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4876c649-d368-4ba3-bf71-c56ef11a2aee 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:23.578 null0 00:22:23.578 null1 00:22:23.578 null2 00:22:23.578 [2024-12-10 11:25:30.269533] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.578 [2024-12-10 11:25:30.293764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 79925 /var/tmp/tgt2.sock 00:22:23.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 79925 ']' 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.578 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:23.578 [2024-12-10 11:25:30.365703] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:22:23.578 [2024-12-10 11:25:30.365884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79925 ] 00:22:23.852 [2024-12-10 11:25:30.547994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.125 [2024-12-10 11:25:30.677089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.125 [2024-12-10 11:25:30.937669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:24.692 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.692 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:24.692 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:25.259 [2024-12-10 11:25:31.858867] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.259 [2024-12-10 11:25:31.875075] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:25.259 nvme0n1 nvme0n2 00:22:25.259 nvme1n1 00:22:25.259 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:25.259 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:25.259 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:25.259 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 4b8b8466-99db-4dd9-833a-0d75af48a65c 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4b8b846699db4dd9833a0d75af48a65c 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4B8B846699DB4DD9833A0D75AF48A65C 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 4B8B846699DB4DD9833A0D75AF48A65C == \4\B\8\B\8\4\6\6\9\9\D\B\4\D\D\9\8\3\3\A\0\D\7\5\A\F\4\8\A\6\5\C ]] 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e8c75210-a368-4a02-9c6b-c01a81808b12 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e8c75210a3684a029c6bc01a81808b12 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E8C75210A3684A029C6BC01A81808B12 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E8C75210A3684A029C6BC01A81808B12 == \E\8\C\7\5\2\1\0\A\3\6\8\4\A\0\2\9\C\6\B\C\0\1\A\8\1\8\0\8\B\1\2 ]] 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4876c649-d368-4ba3-bf71-c56ef11a2aee 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4876c649d3684ba3bf71c56ef11a2aee 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4876C649D3684BA3BF71C56EF11A2AEE 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4876C649D3684BA3BF71C56EF11A2AEE == \4\8\7\6\C\6\4\9\D\3\6\8\4\B\A\3\B\F\7\1\C\5\6\E\F\1\1\A\2\A\E\E ]] 00:22:26.637 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 79925 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 79925 ']' 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 79925 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79925 00:22:26.896 killing process with pid 79925 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79925' 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 79925 00:22:26.896 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 79925 00:22:28.800 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:28.800 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.800 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:29.058 
11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.058 rmmod nvme_tcp 00:22:29.058 rmmod nvme_fabrics 00:22:29.058 rmmod nvme_keyring 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 79891 ']' 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 79891 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 79891 ']' 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 79891 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79891 00:22:29.058 killing process with pid 79891 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79891' 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 79891 00:22:29.058 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 79891 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:29.994 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:30.253 11:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:30.253 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:30.253 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:30.253 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:30.253 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.253 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.253 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.253 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:22:30.253 00:22:30.253 real 0m8.651s 00:22:30.253 user 0m13.406s 00:22:30.253 sys 0m1.982s 00:22:30.253 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.253 ************************************ 00:22:30.253 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:30.253 END TEST nvmf_nsid 00:22:30.253 ************************************ 00:22:30.512 11:25:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:30.512 ************************************ 00:22:30.512 END TEST nvmf_target_extra 00:22:30.512 ************************************ 00:22:30.512 00:22:30.512 real 8m9.350s 00:22:30.512 user 19m46.360s 00:22:30.512 sys 1m56.001s 00:22:30.512 11:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.512 11:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:30.512 11:25:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:30.512 11:25:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:30.512 11:25:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.512 11:25:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:30.512 ************************************ 00:22:30.512 START TEST nvmf_host 00:22:30.512 ************************************ 00:22:30.512 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:30.512 * Looking for test storage... 
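The nvme0n1/n2/n3 checks traced in the nsid test above reduce to comparing each namespace's reported NGUID against its UUID with the dashes stripped. A minimal standalone version of that check (device path and UUID taken from this run, the rest illustrative):

  uuid=4b8b8466-99db-4dd9-833a-0d75af48a65c
  expected=$(tr -d - <<< "$uuid")                        # uuid2nguid: drop the dashes
  nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ "${nguid,,}" == "${expected,,}" ]] && echo "nguid matches namespace uuid"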
00:22:30.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:30.512 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:30.512 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:30.512 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:30.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.772 --rc genhtml_branch_coverage=1 00:22:30.772 --rc genhtml_function_coverage=1 00:22:30.772 --rc genhtml_legend=1 00:22:30.772 --rc geninfo_all_blocks=1 00:22:30.772 --rc geninfo_unexecuted_blocks=1 00:22:30.772 00:22:30.772 ' 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:30.772 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:22:30.772 --rc genhtml_branch_coverage=1 00:22:30.772 --rc genhtml_function_coverage=1 00:22:30.772 --rc genhtml_legend=1 00:22:30.772 --rc geninfo_all_blocks=1 00:22:30.772 --rc geninfo_unexecuted_blocks=1 00:22:30.772 00:22:30.772 ' 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:30.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.772 --rc genhtml_branch_coverage=1 00:22:30.772 --rc genhtml_function_coverage=1 00:22:30.772 --rc genhtml_legend=1 00:22:30.772 --rc geninfo_all_blocks=1 00:22:30.772 --rc geninfo_unexecuted_blocks=1 00:22:30.772 00:22:30.772 ' 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:30.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.772 --rc genhtml_branch_coverage=1 00:22:30.772 --rc genhtml_function_coverage=1 00:22:30.772 --rc genhtml_legend=1 00:22:30.772 --rc geninfo_all_blocks=1 00:22:30.772 --rc geninfo_unexecuted_blocks=1 00:22:30.772 00:22:30.772 ' 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:30.772 11:25:37 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:30.773 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:30.773 
11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.773 ************************************ 00:22:30.773 START TEST nvmf_identify 00:22:30.773 ************************************ 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:30.773 * Looking for test storage... 00:22:30.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.773 --rc genhtml_branch_coverage=1 00:22:30.773 --rc genhtml_function_coverage=1 00:22:30.773 --rc genhtml_legend=1 00:22:30.773 --rc geninfo_all_blocks=1 00:22:30.773 --rc geninfo_unexecuted_blocks=1 00:22:30.773 00:22:30.773 ' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.773 --rc genhtml_branch_coverage=1 00:22:30.773 --rc genhtml_function_coverage=1 00:22:30.773 --rc genhtml_legend=1 00:22:30.773 --rc geninfo_all_blocks=1 00:22:30.773 --rc geninfo_unexecuted_blocks=1 00:22:30.773 00:22:30.773 ' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.773 --rc genhtml_branch_coverage=1 00:22:30.773 --rc genhtml_function_coverage=1 00:22:30.773 --rc genhtml_legend=1 00:22:30.773 --rc geninfo_all_blocks=1 00:22:30.773 --rc geninfo_unexecuted_blocks=1 00:22:30.773 00:22:30.773 ' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.773 --rc genhtml_branch_coverage=1 00:22:30.773 --rc genhtml_function_coverage=1 00:22:30.773 --rc genhtml_legend=1 00:22:30.773 --rc geninfo_all_blocks=1 00:22:30.773 --rc geninfo_unexecuted_blocks=1 00:22:30.773 00:22:30.773 ' 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.773 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.032 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.033 
11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.033 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.033 11:25:37 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:31.033 Cannot find device "nvmf_init_br" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:31.033 Cannot find device "nvmf_init_br2" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:31.033 Cannot find device "nvmf_tgt_br" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:22:31.033 Cannot find device "nvmf_tgt_br2" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:31.033 Cannot find device "nvmf_init_br" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:31.033 Cannot find device "nvmf_init_br2" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:31.033 Cannot find device "nvmf_tgt_br" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:31.033 Cannot find device "nvmf_tgt_br2" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:31.033 Cannot find device "nvmf_br" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:31.033 Cannot find device "nvmf_init_if" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:31.033 Cannot find device "nvmf_init_if2" 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:31.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:31.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:31.033 
11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:31.033 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:31.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:31.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:22:31.292 00:22:31.292 --- 10.0.0.3 ping statistics --- 00:22:31.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.292 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:31.292 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:31.292 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:22:31.292 00:22:31.292 --- 10.0.0.4 ping statistics --- 00:22:31.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.292 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:31.292 11:25:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:31.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:31.292 00:22:31.292 --- 10.0.0.1 ping statistics --- 00:22:31.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.292 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:31.292 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:31.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:22:31.292 00:22:31.292 --- 10.0.0.2 ping statistics --- 00:22:31.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.292 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:31.292 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.292 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=80317 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 80317 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 80317 ']' 00:22:31.293 
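The interface setup traced above builds two veth pairs into the nvmf_tgt_ns_spdk namespace and bridges everything over nvmf_br, which is why the pings to 10.0.0.3/10.0.0.4 (target side) and 10.0.0.1/10.0.0.2 (initiator side) all succeed. Stripped down to one of the two pairs, the topology is roughly (names and addresses as in the log; bringing links up and the iptables ACCEPT rules are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side moves into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br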
11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.293 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.551 [2024-12-10 11:25:38.159078] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:22:31.551 [2024-12-10 11:25:38.159265] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.551 [2024-12-10 11:25:38.354777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.810 [2024-12-10 11:25:38.488206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.810 [2024-12-10 11:25:38.488293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.810 [2024-12-10 11:25:38.488329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.810 [2024-12-10 11:25:38.488345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.810 [2024-12-10 11:25:38.488390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
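With nvmf_tgt listening on /var/tmp/spdk.sock inside the namespace, the rpc_cmd calls traced below configure the identify target. Outside the test harness the same sequence is roughly (values copied from the trace that follows; the RPC socket path is the default assumed by rpc_cmd):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  $rpc -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  $rpc -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc -s /var/tmp/spdk.sock nvmf_get_subsystems       # dumps the discovery + cnode1 JSON seen below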
00:22:31.810 [2024-12-10 11:25:38.490574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.810 [2024-12-10 11:25:38.490711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.810 [2024-12-10 11:25:38.490816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.810 [2024-12-10 11:25:38.490908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.068 [2024-12-10 11:25:38.681456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:32.335 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.335 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:32.335 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.335 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.335 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.335 [2024-12-10 11:25:39.123137] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.335 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.335 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:32.335 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.335 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.624 Malloc0 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.624 [2024-12-10 11:25:39.278302] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.624 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.624 [ 00:22:32.624 { 00:22:32.624 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:32.624 "subtype": "Discovery", 00:22:32.624 "listen_addresses": [ 00:22:32.624 { 00:22:32.624 "trtype": "TCP", 00:22:32.624 "adrfam": "IPv4", 00:22:32.624 "traddr": "10.0.0.3", 00:22:32.624 "trsvcid": "4420" 00:22:32.624 } 00:22:32.624 ], 00:22:32.625 "allow_any_host": true, 00:22:32.625 "hosts": [] 00:22:32.625 }, 00:22:32.625 { 00:22:32.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.625 "subtype": "NVMe", 00:22:32.625 "listen_addresses": [ 00:22:32.625 { 00:22:32.625 "trtype": "TCP", 00:22:32.625 "adrfam": "IPv4", 00:22:32.625 "traddr": "10.0.0.3", 00:22:32.625 "trsvcid": "4420" 00:22:32.625 } 00:22:32.625 ], 00:22:32.625 "allow_any_host": true, 00:22:32.625 "hosts": [], 00:22:32.625 "serial_number": "SPDK00000000000001", 00:22:32.625 "model_number": "SPDK bdev Controller", 00:22:32.625 "max_namespaces": 32, 00:22:32.625 "min_cntlid": 1, 00:22:32.625 "max_cntlid": 65519, 00:22:32.625 "namespaces": [ 00:22:32.625 { 00:22:32.625 "nsid": 1, 00:22:32.625 "bdev_name": "Malloc0", 00:22:32.625 "name": "Malloc0", 00:22:32.625 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:32.625 "eui64": "ABCDEF0123456789", 00:22:32.625 "uuid": "809aed85-baa1-4ace-806c-a26cf094a095" 00:22:32.625 } 00:22:32.625 ] 00:22:32.625 } 00:22:32.625 ] 00:22:32.625 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.625 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:32.625 [2024-12-10 11:25:39.365341] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:22:32.625 [2024-12-10 11:25:39.365522] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80352 ] 00:22:32.886 [2024-12-10 11:25:39.553999] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:32.886 [2024-12-10 11:25:39.554164] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:32.886 [2024-12-10 11:25:39.554180] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:32.886 [2024-12-10 11:25:39.554221] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:32.886 [2024-12-10 11:25:39.554237] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:32.886 [2024-12-10 11:25:39.558716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:32.886 [2024-12-10 11:25:39.558832] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:22:32.886 [2024-12-10 11:25:39.566394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:32.886 [2024-12-10 11:25:39.566429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:32.886 [2024-12-10 11:25:39.566461] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:32.886 [2024-12-10 11:25:39.566468] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:32.886 [2024-12-10 11:25:39.566557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.566572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.566581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.886 [2024-12-10 11:25:39.566618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:32.886 [2024-12-10 11:25:39.566681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.886 [2024-12-10 11:25:39.574416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.886 [2024-12-10 11:25:39.574449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.886 [2024-12-10 11:25:39.574474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.574484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.886 [2024-12-10 11:25:39.574506] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:32.886 [2024-12-10 11:25:39.574523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:32.886 [2024-12-10 11:25:39.574534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:32.886 [2024-12-10 11:25:39.574563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.574573] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.574580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.886 [2024-12-10 11:25:39.574597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.886 [2024-12-10 11:25:39.574634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.886 [2024-12-10 11:25:39.574747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.886 [2024-12-10 11:25:39.574770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.886 [2024-12-10 11:25:39.574778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.574786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.886 [2024-12-10 11:25:39.574803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:32.886 [2024-12-10 11:25:39.574818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:32.886 [2024-12-10 11:25:39.574832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.574840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.574848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.886 [2024-12-10 11:25:39.574867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.886 [2024-12-10 11:25:39.574900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.886 [2024-12-10 11:25:39.574973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.886 [2024-12-10 11:25:39.574986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.886 [2024-12-10 11:25:39.574992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.575000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.886 [2024-12-10 11:25:39.575011] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:32.886 [2024-12-10 11:25:39.575026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:32.886 [2024-12-10 11:25:39.575044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.575055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.575063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.886 [2024-12-10 11:25:39.575078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.886 [2024-12-10 11:25:39.575107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.886 [2024-12-10 11:25:39.575178] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.886 [2024-12-10 11:25:39.575190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.886 [2024-12-10 11:25:39.575199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.575206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.886 [2024-12-10 11:25:39.575217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:32.886 [2024-12-10 11:25:39.575235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.886 [2024-12-10 11:25:39.575244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.575251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.575265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.887 [2024-12-10 11:25:39.575297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.887 [2024-12-10 11:25:39.575362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.887 [2024-12-10 11:25:39.575374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.887 [2024-12-10 11:25:39.575396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.575405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.887 [2024-12-10 11:25:39.575416] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:32.887 [2024-12-10 11:25:39.575426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:32.887 [2024-12-10 11:25:39.575440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:32.887 [2024-12-10 11:25:39.575551] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:32.887 [2024-12-10 11:25:39.575560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:32.887 [2024-12-10 11:25:39.575575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.575584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.575596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.575611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.887 [2024-12-10 11:25:39.575655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.887 [2024-12-10 11:25:39.575723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.887 [2024-12-10 11:25:39.575736] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.887 [2024-12-10 11:25:39.575742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.575749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.887 [2024-12-10 11:25:39.575760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:32.887 [2024-12-10 11:25:39.575784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.575794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.575801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.575815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.887 [2024-12-10 11:25:39.575844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.887 [2024-12-10 11:25:39.575928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.887 [2024-12-10 11:25:39.575941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.887 [2024-12-10 11:25:39.575947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.575954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.887 [2024-12-10 11:25:39.575963] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:32.887 [2024-12-10 11:25:39.575973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:32.887 [2024-12-10 11:25:39.576000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:32.887 [2024-12-10 11:25:39.576022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:32.887 [2024-12-10 11:25:39.576043] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.576067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.887 [2024-12-10 11:25:39.576099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.887 [2024-12-10 11:25:39.576226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.887 [2024-12-10 11:25:39.576241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.887 [2024-12-10 11:25:39.576248] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576257] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:22:32.887 [2024-12-10 11:25:39.576265] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:32.887 [2024-12-10 11:25:39.576274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576288] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576296] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.887 [2024-12-10 11:25:39.576324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.887 [2024-12-10 11:25:39.576330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.887 [2024-12-10 11:25:39.576371] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:32.887 [2024-12-10 11:25:39.576388] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:32.887 [2024-12-10 11:25:39.576397] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:32.887 [2024-12-10 11:25:39.576407] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:32.887 [2024-12-10 11:25:39.576417] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:32.887 [2024-12-10 11:25:39.576426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:32.887 [2024-12-10 11:25:39.576445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:32.887 [2024-12-10 11:25:39.576459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.576494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:32.887 [2024-12-10 11:25:39.576526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.887 [2024-12-10 11:25:39.576602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.887 [2024-12-10 11:25:39.576616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.887 [2024-12-10 11:25:39.576623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.887 [2024-12-10 11:25:39.576644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576660] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.576678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.887 [2024-12-10 11:25:39.576692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.576717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.887 [2024-12-10 11:25:39.576726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.576750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.887 [2024-12-10 11:25:39.576759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.576783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.887 [2024-12-10 11:25:39.576795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:32.887 [2024-12-10 11:25:39.576816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:32.887 [2024-12-10 11:25:39.576829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.576839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:32.887 [2024-12-10 11:25:39.576853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.887 [2024-12-10 11:25:39.576885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:32.887 [2024-12-10 11:25:39.576897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:22:32.887 [2024-12-10 11:25:39.576905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:22:32.887 [2024-12-10 11:25:39.576913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.887 [2024-12-10 11:25:39.576921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:32.887 [2024-12-10 11:25:39.577030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.887 [2024-12-10 11:25:39.577042] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.887 [2024-12-10 11:25:39.577049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.887 [2024-12-10 11:25:39.577056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:32.887 [2024-12-10 11:25:39.577066] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:32.888 [2024-12-10 11:25:39.577077] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:32.888 [2024-12-10 11:25:39.577104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:32.888 [2024-12-10 11:25:39.577128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.888 [2024-12-10 11:25:39.577156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:32.888 [2024-12-10 11:25:39.577250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.888 [2024-12-10 11:25:39.577276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.888 [2024-12-10 11:25:39.577291] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577299] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:32.888 [2024-12-10 11:25:39.577308] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:32.888 [2024-12-10 11:25:39.577316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577330] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577338] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.888 [2024-12-10 11:25:39.577381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.888 [2024-12-10 11:25:39.577391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:32.888 [2024-12-10 11:25:39.577428] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:32.888 [2024-12-10 11:25:39.577495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:32.888 [2024-12-10 11:25:39.577525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.888 [2024-12-10 11:25:39.577539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:32.888 [2024-12-10 11:25:39.577554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:32.888 [2024-12-10 11:25:39.577570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.888 [2024-12-10 11:25:39.577613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:32.888 [2024-12-10 11:25:39.577629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:32.888 [2024-12-10 11:25:39.577909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.888 [2024-12-10 11:25:39.577935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.888 [2024-12-10 11:25:39.577953] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577965] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:22:32.888 [2024-12-10 11:25:39.577973] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:22:32.888 [2024-12-10 11:25:39.577981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.577994] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.578002] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.578012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.888 [2024-12-10 11:25:39.578021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.888 [2024-12-10 11:25:39.578033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.578041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:32.888 [2024-12-10 11:25:39.578072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.888 [2024-12-10 11:25:39.578085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.888 [2024-12-10 11:25:39.578091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.578098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:32.888 [2024-12-10 11:25:39.578135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.578148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:32.888 [2024-12-10 11:25:39.578163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.888 [2024-12-10 11:25:39.578216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:32.888 [2024-12-10 11:25:39.578332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.888 [2024-12-10 11:25:39.578344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.888 [2024-12-10 11:25:39.582378] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582408] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:22:32.888 [2024-12-10 11:25:39.582417] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:22:32.888 [2024-12-10 11:25:39.582425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582441] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582448] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.888 [2024-12-10 11:25:39.582483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.888 [2024-12-10 11:25:39.582491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:32.888 [2024-12-10 11:25:39.582525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:32.888 [2024-12-10 11:25:39.582551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.888 [2024-12-10 11:25:39.582594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:32.888 [2024-12-10 11:25:39.582724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:32.888 [2024-12-10 11:25:39.582736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:32.888 [2024-12-10 11:25:39.582745] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582753] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:22:32.888 [2024-12-10 11:25:39.582761] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:22:32.888 [2024-12-10 11:25:39.582768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582780] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582787] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.888 [2024-12-10 11:25:39.582824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.888 [2024-12-10 11:25:39.582830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.888 [2024-12-10 11:25:39.582837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:32.888 ===================================================== 00:22:32.888 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:32.888 ===================================================== 00:22:32.888 Controller Capabilities/Features 00:22:32.888 ================================ 00:22:32.888 Vendor ID: 0000 00:22:32.888 Subsystem Vendor ID: 0000 00:22:32.888 Serial Number: .................... 
00:22:32.888 Model Number: ........................................ 00:22:32.888 Firmware Version: 25.01 00:22:32.888 Recommended Arb Burst: 0 00:22:32.889 IEEE OUI Identifier: 00 00 00 00:22:32.889 Multi-path I/O 00:22:32.889 May have multiple subsystem ports: No 00:22:32.889 May have multiple controllers: No 00:22:32.889 Associated with SR-IOV VF: No 00:22:32.889 Max Data Transfer Size: 131072 00:22:32.889 Max Number of Namespaces: 0 00:22:32.889 Max Number of I/O Queues: 1024 00:22:32.889 NVMe Specification Version (VS): 1.3 00:22:32.889 NVMe Specification Version (Identify): 1.3 00:22:32.889 Maximum Queue Entries: 128 00:22:32.889 Contiguous Queues Required: Yes 00:22:32.889 Arbitration Mechanisms Supported 00:22:32.889 Weighted Round Robin: Not Supported 00:22:32.889 Vendor Specific: Not Supported 00:22:32.889 Reset Timeout: 15000 ms 00:22:32.889 Doorbell Stride: 4 bytes 00:22:32.889 NVM Subsystem Reset: Not Supported 00:22:32.889 Command Sets Supported 00:22:32.889 NVM Command Set: Supported 00:22:32.889 Boot Partition: Not Supported 00:22:32.889 Memory Page Size Minimum: 4096 bytes 00:22:32.889 Memory Page Size Maximum: 4096 bytes 00:22:32.889 Persistent Memory Region: Not Supported 00:22:32.889 Optional Asynchronous Events Supported 00:22:32.889 Namespace Attribute Notices: Not Supported 00:22:32.889 Firmware Activation Notices: Not Supported 00:22:32.889 ANA Change Notices: Not Supported 00:22:32.889 PLE Aggregate Log Change Notices: Not Supported 00:22:32.889 LBA Status Info Alert Notices: Not Supported 00:22:32.889 EGE Aggregate Log Change Notices: Not Supported 00:22:32.889 Normal NVM Subsystem Shutdown event: Not Supported 00:22:32.889 Zone Descriptor Change Notices: Not Supported 00:22:32.889 Discovery Log Change Notices: Supported 00:22:32.889 Controller Attributes 00:22:32.889 128-bit Host Identifier: Not Supported 00:22:32.889 Non-Operational Permissive Mode: Not Supported 00:22:32.889 NVM Sets: Not Supported 00:22:32.889 Read Recovery Levels: Not Supported 00:22:32.889 Endurance Groups: Not Supported 00:22:32.889 Predictable Latency Mode: Not Supported 00:22:32.889 Traffic Based Keep ALive: Not Supported 00:22:32.889 Namespace Granularity: Not Supported 00:22:32.889 SQ Associations: Not Supported 00:22:32.889 UUID List: Not Supported 00:22:32.889 Multi-Domain Subsystem: Not Supported 00:22:32.889 Fixed Capacity Management: Not Supported 00:22:32.889 Variable Capacity Management: Not Supported 00:22:32.889 Delete Endurance Group: Not Supported 00:22:32.889 Delete NVM Set: Not Supported 00:22:32.889 Extended LBA Formats Supported: Not Supported 00:22:32.889 Flexible Data Placement Supported: Not Supported 00:22:32.889 00:22:32.889 Controller Memory Buffer Support 00:22:32.889 ================================ 00:22:32.889 Supported: No 00:22:32.889 00:22:32.889 Persistent Memory Region Support 00:22:32.889 ================================ 00:22:32.889 Supported: No 00:22:32.889 00:22:32.889 Admin Command Set Attributes 00:22:32.889 ============================ 00:22:32.889 Security Send/Receive: Not Supported 00:22:32.889 Format NVM: Not Supported 00:22:32.889 Firmware Activate/Download: Not Supported 00:22:32.889 Namespace Management: Not Supported 00:22:32.889 Device Self-Test: Not Supported 00:22:32.889 Directives: Not Supported 00:22:32.889 NVMe-MI: Not Supported 00:22:32.889 Virtualization Management: Not Supported 00:22:32.889 Doorbell Buffer Config: Not Supported 00:22:32.889 Get LBA Status Capability: Not Supported 00:22:32.889 Command & Feature Lockdown Capability: 
Not Supported 00:22:32.889 Abort Command Limit: 1 00:22:32.889 Async Event Request Limit: 4 00:22:32.889 Number of Firmware Slots: N/A 00:22:32.889 Firmware Slot 1 Read-Only: N/A 00:22:32.889 Firmware Activation Without Reset: N/A 00:22:32.889 Multiple Update Detection Support: N/A 00:22:32.889 Firmware Update Granularity: No Information Provided 00:22:32.889 Per-Namespace SMART Log: No 00:22:32.889 Asymmetric Namespace Access Log Page: Not Supported 00:22:32.889 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:32.889 Command Effects Log Page: Not Supported 00:22:32.889 Get Log Page Extended Data: Supported 00:22:32.889 Telemetry Log Pages: Not Supported 00:22:32.889 Persistent Event Log Pages: Not Supported 00:22:32.889 Supported Log Pages Log Page: May Support 00:22:32.889 Commands Supported & Effects Log Page: Not Supported 00:22:32.889 Feature Identifiers & Effects Log Page:May Support 00:22:32.889 NVMe-MI Commands & Effects Log Page: May Support 00:22:32.889 Data Area 4 for Telemetry Log: Not Supported 00:22:32.889 Error Log Page Entries Supported: 128 00:22:32.889 Keep Alive: Not Supported 00:22:32.889 00:22:32.889 NVM Command Set Attributes 00:22:32.889 ========================== 00:22:32.889 Submission Queue Entry Size 00:22:32.889 Max: 1 00:22:32.889 Min: 1 00:22:32.889 Completion Queue Entry Size 00:22:32.889 Max: 1 00:22:32.889 Min: 1 00:22:32.889 Number of Namespaces: 0 00:22:32.889 Compare Command: Not Supported 00:22:32.889 Write Uncorrectable Command: Not Supported 00:22:32.889 Dataset Management Command: Not Supported 00:22:32.889 Write Zeroes Command: Not Supported 00:22:32.889 Set Features Save Field: Not Supported 00:22:32.889 Reservations: Not Supported 00:22:32.889 Timestamp: Not Supported 00:22:32.889 Copy: Not Supported 00:22:32.889 Volatile Write Cache: Not Present 00:22:32.889 Atomic Write Unit (Normal): 1 00:22:32.889 Atomic Write Unit (PFail): 1 00:22:32.889 Atomic Compare & Write Unit: 1 00:22:32.889 Fused Compare & Write: Supported 00:22:32.889 Scatter-Gather List 00:22:32.889 SGL Command Set: Supported 00:22:32.889 SGL Keyed: Supported 00:22:32.889 SGL Bit Bucket Descriptor: Not Supported 00:22:32.889 SGL Metadata Pointer: Not Supported 00:22:32.889 Oversized SGL: Not Supported 00:22:32.889 SGL Metadata Address: Not Supported 00:22:32.889 SGL Offset: Supported 00:22:32.889 Transport SGL Data Block: Not Supported 00:22:32.889 Replay Protected Memory Block: Not Supported 00:22:32.889 00:22:32.889 Firmware Slot Information 00:22:32.889 ========================= 00:22:32.889 Active slot: 0 00:22:32.889 00:22:32.889 00:22:32.889 Error Log 00:22:32.889 ========= 00:22:32.889 00:22:32.889 Active Namespaces 00:22:32.889 ================= 00:22:32.889 Discovery Log Page 00:22:32.889 ================== 00:22:32.889 Generation Counter: 2 00:22:32.889 Number of Records: 2 00:22:32.889 Record Format: 0 00:22:32.889 00:22:32.889 Discovery Log Entry 0 00:22:32.889 ---------------------- 00:22:32.889 Transport Type: 3 (TCP) 00:22:32.889 Address Family: 1 (IPv4) 00:22:32.889 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:32.889 Entry Flags: 00:22:32.889 Duplicate Returned Information: 1 00:22:32.889 Explicit Persistent Connection Support for Discovery: 1 00:22:32.889 Transport Requirements: 00:22:32.889 Secure Channel: Not Required 00:22:32.889 Port ID: 0 (0x0000) 00:22:32.889 Controller ID: 65535 (0xffff) 00:22:32.889 Admin Max SQ Size: 128 00:22:32.889 Transport Service Identifier: 4420 00:22:32.889 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:22:32.889 Transport Address: 10.0.0.3 00:22:32.889 Discovery Log Entry 1 00:22:32.889 ---------------------- 00:22:32.889 Transport Type: 3 (TCP) 00:22:32.889 Address Family: 1 (IPv4) 00:22:32.889 Subsystem Type: 2 (NVM Subsystem) 00:22:32.889 Entry Flags: 00:22:32.889 Duplicate Returned Information: 0 00:22:32.889 Explicit Persistent Connection Support for Discovery: 0 00:22:32.889 Transport Requirements: 00:22:32.889 Secure Channel: Not Required 00:22:32.889 Port ID: 0 (0x0000) 00:22:32.889 Controller ID: 65535 (0xffff) 00:22:32.889 Admin Max SQ Size: 128 00:22:32.889 Transport Service Identifier: 4420 00:22:32.889 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:32.889 Transport Address: 10.0.0.3 [2024-12-10 11:25:39.583064] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:32.889 [2024-12-10 11:25:39.583095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:32.889 [2024-12-10 11:25:39.583111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.889 [2024-12-10 11:25:39.583122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:22:32.889 [2024-12-10 11:25:39.583132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.889 [2024-12-10 11:25:39.583140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:22:32.889 [2024-12-10 11:25:39.583149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.889 [2024-12-10 11:25:39.583157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.889 [2024-12-10 11:25:39.583166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.889 [2024-12-10 11:25:39.583182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.889 [2024-12-10 11:25:39.583191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.583198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.583218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.583255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.583336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.583365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.583375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.583384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.583399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.583408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 
[2024-12-10 11:25:39.583421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.583441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.583477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.583618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.583632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.583638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.583656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.583667] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:32.890 [2024-12-10 11:25:39.583676] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:32.890 [2024-12-10 11:25:39.583694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.583704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.583711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.583729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.583767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.583833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.583847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.583854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.583861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.583880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.583894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.583901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.583914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.583941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.584010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.584027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.584034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.584062] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.584090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.584117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.584183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.584196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.584202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.584227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.584254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.584281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.584402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.584431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.584439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.584470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.584503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.584532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.584617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.584636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.584643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.584668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584676] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.584696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.584723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.584789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.584801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.584807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.584835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.584863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.584889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.584955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.584967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.584973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.584980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.584998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.585006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.585013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.585025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.585055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.585115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.585127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.585133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.585140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.585157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.585168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.585175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.890 [2024-12-10 11:25:39.585190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.890 [2024-12-10 11:25:39.585217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.890 [2024-12-10 11:25:39.585279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.890 [2024-12-10 11:25:39.585291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.890 [2024-12-10 11:25:39.585297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.585304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.890 [2024-12-10 11:25:39.585324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.585333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.890 [2024-12-10 11:25:39.585340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.891 [2024-12-10 11:25:39.585365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.891 [2024-12-10 11:25:39.585395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.891 [2024-12-10 11:25:39.585457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.891 [2024-12-10 11:25:39.585474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.891 [2024-12-10 11:25:39.585481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.585490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.891 [2024-12-10 11:25:39.585509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.585518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.585524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.891 [2024-12-10 11:25:39.585537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.891 [2024-12-10 11:25:39.585567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.891 [2024-12-10 11:25:39.585628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.891 [2024-12-10 11:25:39.585643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.891 [2024-12-10 11:25:39.585649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.585656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.891 [2024-12-10 11:25:39.585674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.585682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.585689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.891 [2024-12-10 11:25:39.585702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.891 [2024-12-10 11:25:39.585732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.891 [2024-12-10 11:25:39.585795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.891 [2024-12-10 11:25:39.585820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.891 [2024-12-10 11:25:39.585828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.585835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.891 [2024-12-10 11:25:39.585854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.585862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.585869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.891 [2024-12-10 11:25:39.585882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.891 [2024-12-10 11:25:39.585909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.891 [2024-12-10 11:25:39.585974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.891 [2024-12-10 11:25:39.585992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.891 [2024-12-10 11:25:39.585999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.586007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.891 [2024-12-10 11:25:39.586025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.586034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.586040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.891 [2024-12-10 11:25:39.586053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.891 [2024-12-10 11:25:39.586084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.891 [2024-12-10 11:25:39.586142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.891 [2024-12-10 11:25:39.586159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.891 [2024-12-10 11:25:39.586166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.586173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.891 [2024-12-10 11:25:39.586192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.586201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.586211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.891 [2024-12-10 11:25:39.586225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.891 [2024-12-10 11:25:39.586252] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.891 [2024-12-10 11:25:39.586324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.891 [2024-12-10 11:25:39.586336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.891 [2024-12-10 11:25:39.586342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.590369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.891 [2024-12-10 11:25:39.590427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.590438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.590445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:32.891 [2024-12-10 11:25:39.590460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.891 [2024-12-10 11:25:39.590494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:32.891 [2024-12-10 11:25:39.590570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:32.891 [2024-12-10 11:25:39.590585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:32.891 [2024-12-10 11:25:39.590592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:32.891 [2024-12-10 11:25:39.590600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:32.891 [2024-12-10 11:25:39.590615] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:22:32.891 00:22:32.891 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:33.153 [2024-12-10 11:25:39.711882] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
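(The flood of *DEBUG* lines before and after this point comes from the "-L all" flag on the spdk_nvme_identify invocation above, which turns on every SPDK log flag for the run. For orientation only, here is a minimal, hypothetical C sketch of what that invocation amounts to: parse the same transport ID string shown on the command line, connect to the subsystem over NVMe-oF/TCP, and read back the Identify Controller data that feeds the report printed further down. This is not the identify example's actual source; the program name "identify_sketch" is invented, and the calls are the public SPDK host API from spdk/env.h and spdk/nvme.h.)

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";          /* invented name, not from the test */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string passed to spdk_nvme_identify -r above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Connecting performs the admin-queue bring-up traced by the
	 * _nvme_ctrlr_set_state / nvme_tcp DEBUG lines that follow. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify Controller data, the source of the controller report below. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number:  %.40s\n", cdata->mn);
	printf("Serial Number: %.20s\n", cdata->sn);

	spdk_nvme_detach(ctrlr);
	return 0;
}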
00:22:33.153 [2024-12-10 11:25:39.712008] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80362 ] 00:22:33.153 [2024-12-10 11:25:39.902220] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:33.153 [2024-12-10 11:25:39.902414] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:33.153 [2024-12-10 11:25:39.902431] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:33.153 [2024-12-10 11:25:39.902459] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:33.153 [2024-12-10 11:25:39.902475] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:33.153 [2024-12-10 11:25:39.902884] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:33.153 [2024-12-10 11:25:39.902967] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:22:33.153 [2024-12-10 11:25:39.915380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:33.153 [2024-12-10 11:25:39.915414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:33.153 [2024-12-10 11:25:39.915445] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:33.153 [2024-12-10 11:25:39.915452] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:33.153 [2024-12-10 11:25:39.915537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.915553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.915561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.153 [2024-12-10 11:25:39.915588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:33.153 [2024-12-10 11:25:39.915632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.153 [2024-12-10 11:25:39.922408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.153 [2024-12-10 11:25:39.922440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.153 [2024-12-10 11:25:39.922465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.922474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.153 [2024-12-10 11:25:39.922497] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:33.153 [2024-12-10 11:25:39.922514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:33.153 [2024-12-10 11:25:39.922526] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:33.153 [2024-12-10 11:25:39.922555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.922565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.153 
[2024-12-10 11:25:39.922573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.153 [2024-12-10 11:25:39.922590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.153 [2024-12-10 11:25:39.922627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.153 [2024-12-10 11:25:39.922736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.153 [2024-12-10 11:25:39.922750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.153 [2024-12-10 11:25:39.922757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.922765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.153 [2024-12-10 11:25:39.922780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:33.153 [2024-12-10 11:25:39.922795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:33.153 [2024-12-10 11:25:39.922809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.922818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.922826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.153 [2024-12-10 11:25:39.922844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.153 [2024-12-10 11:25:39.922874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.153 [2024-12-10 11:25:39.922950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.153 [2024-12-10 11:25:39.922962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.153 [2024-12-10 11:25:39.922968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.922976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.153 [2024-12-10 11:25:39.922987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:33.153 [2024-12-10 11:25:39.923002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:33.153 [2024-12-10 11:25:39.923015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.923042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.923050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.153 [2024-12-10 11:25:39.923064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.153 [2024-12-10 11:25:39.923096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.153 [2024-12-10 11:25:39.923161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.153 [2024-12-10 11:25:39.923173] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.153 [2024-12-10 11:25:39.923179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.923186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.153 [2024-12-10 11:25:39.923201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:33.153 [2024-12-10 11:25:39.923219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.153 [2024-12-10 11:25:39.923229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.923236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.923249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.154 [2024-12-10 11:25:39.923276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.154 [2024-12-10 11:25:39.923338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.154 [2024-12-10 11:25:39.923355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.154 [2024-12-10 11:25:39.923362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.923369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.154 [2024-12-10 11:25:39.923379] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:33.154 [2024-12-10 11:25:39.923409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:33.154 [2024-12-10 11:25:39.923424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:33.154 [2024-12-10 11:25:39.923534] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:33.154 [2024-12-10 11:25:39.923543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:33.154 [2024-12-10 11:25:39.923559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.923567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.923575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.923589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.154 [2024-12-10 11:25:39.923622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.154 [2024-12-10 11:25:39.923700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.154 [2024-12-10 11:25:39.923714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.154 [2024-12-10 11:25:39.923720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.154 
[2024-12-10 11:25:39.923727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.154 [2024-12-10 11:25:39.923738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:33.154 [2024-12-10 11:25:39.923761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.923771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.923783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.923798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.154 [2024-12-10 11:25:39.923827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.154 [2024-12-10 11:25:39.923896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.154 [2024-12-10 11:25:39.923908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.154 [2024-12-10 11:25:39.923914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.923921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.154 [2024-12-10 11:25:39.923935] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:33.154 [2024-12-10 11:25:39.923946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:33.154 [2024-12-10 11:25:39.923971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:33.154 [2024-12-10 11:25:39.923991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:33.154 [2024-12-10 11:25:39.924012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.924040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.154 [2024-12-10 11:25:39.924071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.154 [2024-12-10 11:25:39.924228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.154 [2024-12-10 11:25:39.924251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.154 [2024-12-10 11:25:39.924259] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924270] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:22:33.154 [2024-12-10 11:25:39.924280] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:33.154 [2024-12-10 11:25:39.924288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:33.154 [2024-12-10 11:25:39.924303] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924311] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.154 [2024-12-10 11:25:39.924335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.154 [2024-12-10 11:25:39.924341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.154 [2024-12-10 11:25:39.924383] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:33.154 [2024-12-10 11:25:39.924395] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:33.154 [2024-12-10 11:25:39.924410] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:33.154 [2024-12-10 11:25:39.924419] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:33.154 [2024-12-10 11:25:39.924428] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:33.154 [2024-12-10 11:25:39.924437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:33.154 [2024-12-10 11:25:39.924456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:33.154 [2024-12-10 11:25:39.924470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.924501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.154 [2024-12-10 11:25:39.924534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.154 [2024-12-10 11:25:39.924610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.154 [2024-12-10 11:25:39.924621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.154 [2024-12-10 11:25:39.924628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.154 [2024-12-10 11:25:39.924649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.924688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.154 [2024-12-10 11:25:39.924699] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.924727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.154 [2024-12-10 11:25:39.924737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.924760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.154 [2024-12-10 11:25:39.924770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.924794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.154 [2024-12-10 11:25:39.924803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:33.154 [2024-12-10 11:25:39.924826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:33.154 [2024-12-10 11:25:39.924838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.924846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.154 [2024-12-10 11:25:39.924859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.154 [2024-12-10 11:25:39.924890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.154 [2024-12-10 11:25:39.924902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:22:33.154 [2024-12-10 11:25:39.924910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:22:33.154 [2024-12-10 11:25:39.924917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.154 [2024-12-10 11:25:39.924925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.154 [2024-12-10 11:25:39.925033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.154 [2024-12-10 11:25:39.925045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.154 [2024-12-10 11:25:39.925051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.154 [2024-12-10 11:25:39.925058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.154 [2024-12-10 11:25:39.925069] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:33.154 [2024-12-10 11:25:39.925083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.925099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.925111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.925125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.155 [2024-12-10 11:25:39.925164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.155 [2024-12-10 11:25:39.925194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.155 [2024-12-10 11:25:39.925260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.155 [2024-12-10 11:25:39.925272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.155 [2024-12-10 11:25:39.925281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.155 [2024-12-10 11:25:39.925396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.925422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.925442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.155 [2024-12-10 11:25:39.925479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.155 [2024-12-10 11:25:39.925512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.155 [2024-12-10 11:25:39.925613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.155 [2024-12-10 11:25:39.925625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.155 [2024-12-10 11:25:39.925632] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925639] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:33.155 [2024-12-10 11:25:39.925647] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:33.155 [2024-12-10 11:25:39.925655] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925671] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925678] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.155 [2024-12-10 11:25:39.925706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.155 [2024-12-10 11:25:39.925712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.155 [2024-12-10 11:25:39.925758] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:33.155 [2024-12-10 11:25:39.925781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.925806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.925824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.925833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.155 [2024-12-10 11:25:39.925850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.155 [2024-12-10 11:25:39.925885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.155 [2024-12-10 11:25:39.925989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.155 [2024-12-10 11:25:39.926008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.155 [2024-12-10 11:25:39.926018] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.926026] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:33.155 [2024-12-10 11:25:39.926034] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:33.155 [2024-12-10 11:25:39.926042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.926054] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.926061] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.926074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.155 [2024-12-10 11:25:39.926083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.155 [2024-12-10 11:25:39.926089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.926097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.155 [2024-12-10 11:25:39.926135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.926164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.926183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.926191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.155 [2024-12-10 11:25:39.926206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.155 [2024-12-10 11:25:39.926236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.155 [2024-12-10 11:25:39.926327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.155 [2024-12-10 11:25:39.926339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.155 [2024-12-10 11:25:39.926345] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930382] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:33.155 [2024-12-10 11:25:39.930396] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:33.155 [2024-12-10 11:25:39.930404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930425] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930433] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.155 [2024-12-10 11:25:39.930473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.155 [2024-12-10 11:25:39.930480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.155 [2024-12-10 11:25:39.930523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.930542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.930573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.930584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.930594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.930603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.930613] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:33.155 [2024-12-10 11:25:39.930624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:33.155 [2024-12-10 11:25:39.930634] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:33.155 [2024-12-10 11:25:39.930672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.155 [2024-12-10 11:25:39.930703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.155 [2024-12-10 11:25:39.930716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:33.155 [2024-12-10 11:25:39.930749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.155 [2024-12-10 11:25:39.930787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.155 [2024-12-10 11:25:39.930800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:33.155 [2024-12-10 11:25:39.930892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.155 [2024-12-10 11:25:39.930919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.155 [2024-12-10 11:25:39.930928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.155 [2024-12-10 11:25:39.930949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.155 [2024-12-10 11:25:39.930963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.155 [2024-12-10 11:25:39.930970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.930977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:33.155 [2024-12-10 11:25:39.930995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.931003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:33.155 [2024-12-10 11:25:39.931016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.155 [2024-12-10 11:25:39.931046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:33.155 [2024-12-10 11:25:39.931116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.155 [2024-12-10 11:25:39.931133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.155 [2024-12-10 11:25:39.931140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.155 [2024-12-10 11:25:39.931147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:33.155 [2024-12-10 11:25:39.931164] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.931172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:33.156 [2024-12-10 11:25:39.931185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.156 [2024-12-10 11:25:39.931214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:33.156 [2024-12-10 11:25:39.931283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.156 [2024-12-10 11:25:39.931295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.156 [2024-12-10 11:25:39.931301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.931308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:33.156 [2024-12-10 11:25:39.931324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.931332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:33.156 [2024-12-10 11:25:39.931365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.156 [2024-12-10 11:25:39.931401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:33.156 [2024-12-10 11:25:39.931468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.156 [2024-12-10 11:25:39.931483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.156 [2024-12-10 11:25:39.931491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.931498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:33.156 [2024-12-10 11:25:39.931530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.931541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:33.156 [2024-12-10 11:25:39.931555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.156 [2024-12-10 11:25:39.931569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.931578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.156 [2024-12-10 11:25:39.931590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.156 [2024-12-10 11:25:39.931603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.931615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:22:33.156 [2024-12-10 11:25:39.931630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.156 [2024-12-10 11:25:39.931656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
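(The *NOTICE* lines in this stretch, GET FEATURES ARBITRATION / POWER MANAGEMENT / TEMPERATURE THRESHOLD / NUMBER OF QUEUES and the GET LOG PAGE commands, are the admin commands the host driver submits while walking the initialization states recorded by _nvme_ctrlr_set_state above. As a rough, hypothetical illustration only, and not code from this test, one such command could be issued through SPDK's public host API as sketched below, assuming a ctrlr handle obtained as in the earlier sketch; the names feat_ctx and get_temp_threshold are invented.)

#include <stdbool.h>
#include <stdint.h>
#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"

struct feat_ctx {
	bool done;
	uint32_t cdw0;
};

static void
get_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct feat_ctx *ctx = arg;

	ctx->cdw0 = spdk_nvme_cpl_is_error(cpl) ? 0 : cpl->cdw0;
	ctx->done = true;
}

/* Mirrors the "GET FEATURES TEMPERATURE THRESHOLD ... cdw10:00000004" command above. */
static uint32_t
get_temp_threshold(struct spdk_nvme_ctrlr *ctrlr)
{
	struct feat_ctx ctx = { .done = false, .cdw0 = 0 };

	if (spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					    0, NULL, 0, get_feature_done, &ctx) != 0) {
		return 0;
	}
	while (!ctx.done) {
		/* Admin completions arrive on the same admin qpair the DEBUG lines above poll. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return ctx.cdw0;	/* threshold value is returned in completion dword 0 */
}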
00:22:33.156 [2024-12-10 11:25:39.931667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:22:33.156 [2024-12-10 11:25:39.931679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.156 [2024-12-10 11:25:39.931714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:33.156 [2024-12-10 11:25:39.931728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.156 [2024-12-10 11:25:39.931736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:22:33.156 [2024-12-10 11:25:39.931744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:22:33.156 [2024-12-10 11:25:39.931925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.156 [2024-12-10 11:25:39.931948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.156 [2024-12-10 11:25:39.931957] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.931964] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:22:33.156 [2024-12-10 11:25:39.931981] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:22:33.156 [2024-12-10 11:25:39.931990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932025] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932034] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.156 [2024-12-10 11:25:39.932053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.156 [2024-12-10 11:25:39.932060] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932066] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:22:33.156 [2024-12-10 11:25:39.932074] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:22:33.156 [2024-12-10 11:25:39.932084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932095] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932102] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.156 [2024-12-10 11:25:39.932119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.156 [2024-12-10 11:25:39.932125] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932131] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:22:33.156 [2024-12-10 11:25:39.932139] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:22:33.156 
[2024-12-10 11:25:39.932145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932158] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932167] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.156 [2024-12-10 11:25:39.932185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.156 [2024-12-10 11:25:39.932191] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932197] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:22:33.156 [2024-12-10 11:25:39.932204] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:33.156 [2024-12-10 11:25:39.932211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932222] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932228] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.156 [2024-12-10 11:25:39.932250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.156 [2024-12-10 11:25:39.932256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:33.156 [2024-12-10 11:25:39.932294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.156 [2024-12-10 11:25:39.932305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.156 [2024-12-10 11:25:39.932311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.156 [2024-12-10 11:25:39.932334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.156 [2024-12-10 11:25:39.932345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.156 [2024-12-10 11:25:39.932368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:22:33.156 [2024-12-10 11:25:39.932394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.156 [2024-12-10 11:25:39.932405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.156 [2024-12-10 11:25:39.932412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.156 [2024-12-10 11:25:39.932418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:22:33.156 ===================================================== 00:22:33.156 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.156 ===================================================== 00:22:33.156 Controller Capabilities/Features 00:22:33.156 ================================ 00:22:33.156 Vendor ID: 8086 00:22:33.156 Subsystem Vendor ID: 8086 
00:22:33.156 Serial Number: SPDK00000000000001 00:22:33.156 Model Number: SPDK bdev Controller 00:22:33.156 Firmware Version: 25.01 00:22:33.156 Recommended Arb Burst: 6 00:22:33.156 IEEE OUI Identifier: e4 d2 5c 00:22:33.156 Multi-path I/O 00:22:33.156 May have multiple subsystem ports: Yes 00:22:33.156 May have multiple controllers: Yes 00:22:33.156 Associated with SR-IOV VF: No 00:22:33.156 Max Data Transfer Size: 131072 00:22:33.156 Max Number of Namespaces: 32 00:22:33.156 Max Number of I/O Queues: 127 00:22:33.156 NVMe Specification Version (VS): 1.3 00:22:33.156 NVMe Specification Version (Identify): 1.3 00:22:33.156 Maximum Queue Entries: 128 00:22:33.156 Contiguous Queues Required: Yes 00:22:33.156 Arbitration Mechanisms Supported 00:22:33.156 Weighted Round Robin: Not Supported 00:22:33.156 Vendor Specific: Not Supported 00:22:33.156 Reset Timeout: 15000 ms 00:22:33.156 Doorbell Stride: 4 bytes 00:22:33.156 NVM Subsystem Reset: Not Supported 00:22:33.156 Command Sets Supported 00:22:33.156 NVM Command Set: Supported 00:22:33.156 Boot Partition: Not Supported 00:22:33.156 Memory Page Size Minimum: 4096 bytes 00:22:33.156 Memory Page Size Maximum: 4096 bytes 00:22:33.156 Persistent Memory Region: Not Supported 00:22:33.156 Optional Asynchronous Events Supported 00:22:33.156 Namespace Attribute Notices: Supported 00:22:33.156 Firmware Activation Notices: Not Supported 00:22:33.156 ANA Change Notices: Not Supported 00:22:33.156 PLE Aggregate Log Change Notices: Not Supported 00:22:33.156 LBA Status Info Alert Notices: Not Supported 00:22:33.156 EGE Aggregate Log Change Notices: Not Supported 00:22:33.156 Normal NVM Subsystem Shutdown event: Not Supported 00:22:33.156 Zone Descriptor Change Notices: Not Supported 00:22:33.156 Discovery Log Change Notices: Not Supported 00:22:33.156 Controller Attributes 00:22:33.156 128-bit Host Identifier: Supported 00:22:33.156 Non-Operational Permissive Mode: Not Supported 00:22:33.157 NVM Sets: Not Supported 00:22:33.157 Read Recovery Levels: Not Supported 00:22:33.157 Endurance Groups: Not Supported 00:22:33.157 Predictable Latency Mode: Not Supported 00:22:33.157 Traffic Based Keep ALive: Not Supported 00:22:33.157 Namespace Granularity: Not Supported 00:22:33.157 SQ Associations: Not Supported 00:22:33.157 UUID List: Not Supported 00:22:33.157 Multi-Domain Subsystem: Not Supported 00:22:33.157 Fixed Capacity Management: Not Supported 00:22:33.157 Variable Capacity Management: Not Supported 00:22:33.157 Delete Endurance Group: Not Supported 00:22:33.157 Delete NVM Set: Not Supported 00:22:33.157 Extended LBA Formats Supported: Not Supported 00:22:33.157 Flexible Data Placement Supported: Not Supported 00:22:33.157 00:22:33.157 Controller Memory Buffer Support 00:22:33.157 ================================ 00:22:33.157 Supported: No 00:22:33.157 00:22:33.157 Persistent Memory Region Support 00:22:33.157 ================================ 00:22:33.157 Supported: No 00:22:33.157 00:22:33.157 Admin Command Set Attributes 00:22:33.157 ============================ 00:22:33.157 Security Send/Receive: Not Supported 00:22:33.157 Format NVM: Not Supported 00:22:33.157 Firmware Activate/Download: Not Supported 00:22:33.157 Namespace Management: Not Supported 00:22:33.157 Device Self-Test: Not Supported 00:22:33.157 Directives: Not Supported 00:22:33.157 NVMe-MI: Not Supported 00:22:33.157 Virtualization Management: Not Supported 00:22:33.157 Doorbell Buffer Config: Not Supported 00:22:33.157 Get LBA Status Capability: Not Supported 00:22:33.157 Command & 
Feature Lockdown Capability: Not Supported 00:22:33.157 Abort Command Limit: 4 00:22:33.157 Async Event Request Limit: 4 00:22:33.157 Number of Firmware Slots: N/A 00:22:33.157 Firmware Slot 1 Read-Only: N/A 00:22:33.157 Firmware Activation Without Reset: N/A 00:22:33.157 Multiple Update Detection Support: N/A 00:22:33.157 Firmware Update Granularity: No Information Provided 00:22:33.157 Per-Namespace SMART Log: No 00:22:33.157 Asymmetric Namespace Access Log Page: Not Supported 00:22:33.157 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:33.157 Command Effects Log Page: Supported 00:22:33.157 Get Log Page Extended Data: Supported 00:22:33.157 Telemetry Log Pages: Not Supported 00:22:33.157 Persistent Event Log Pages: Not Supported 00:22:33.157 Supported Log Pages Log Page: May Support 00:22:33.157 Commands Supported & Effects Log Page: Not Supported 00:22:33.157 Feature Identifiers & Effects Log Page:May Support 00:22:33.157 NVMe-MI Commands & Effects Log Page: May Support 00:22:33.157 Data Area 4 for Telemetry Log: Not Supported 00:22:33.157 Error Log Page Entries Supported: 128 00:22:33.157 Keep Alive: Supported 00:22:33.157 Keep Alive Granularity: 10000 ms 00:22:33.157 00:22:33.157 NVM Command Set Attributes 00:22:33.157 ========================== 00:22:33.157 Submission Queue Entry Size 00:22:33.157 Max: 64 00:22:33.157 Min: 64 00:22:33.157 Completion Queue Entry Size 00:22:33.157 Max: 16 00:22:33.157 Min: 16 00:22:33.157 Number of Namespaces: 32 00:22:33.157 Compare Command: Supported 00:22:33.157 Write Uncorrectable Command: Not Supported 00:22:33.157 Dataset Management Command: Supported 00:22:33.157 Write Zeroes Command: Supported 00:22:33.157 Set Features Save Field: Not Supported 00:22:33.157 Reservations: Supported 00:22:33.157 Timestamp: Not Supported 00:22:33.157 Copy: Supported 00:22:33.157 Volatile Write Cache: Present 00:22:33.157 Atomic Write Unit (Normal): 1 00:22:33.157 Atomic Write Unit (PFail): 1 00:22:33.157 Atomic Compare & Write Unit: 1 00:22:33.157 Fused Compare & Write: Supported 00:22:33.157 Scatter-Gather List 00:22:33.157 SGL Command Set: Supported 00:22:33.157 SGL Keyed: Supported 00:22:33.157 SGL Bit Bucket Descriptor: Not Supported 00:22:33.157 SGL Metadata Pointer: Not Supported 00:22:33.157 Oversized SGL: Not Supported 00:22:33.157 SGL Metadata Address: Not Supported 00:22:33.157 SGL Offset: Supported 00:22:33.157 Transport SGL Data Block: Not Supported 00:22:33.157 Replay Protected Memory Block: Not Supported 00:22:33.157 00:22:33.157 Firmware Slot Information 00:22:33.157 ========================= 00:22:33.157 Active slot: 1 00:22:33.157 Slot 1 Firmware Revision: 25.01 00:22:33.157 00:22:33.157 00:22:33.157 Commands Supported and Effects 00:22:33.157 ============================== 00:22:33.157 Admin Commands 00:22:33.157 -------------- 00:22:33.157 Get Log Page (02h): Supported 00:22:33.157 Identify (06h): Supported 00:22:33.157 Abort (08h): Supported 00:22:33.157 Set Features (09h): Supported 00:22:33.157 Get Features (0Ah): Supported 00:22:33.157 Asynchronous Event Request (0Ch): Supported 00:22:33.157 Keep Alive (18h): Supported 00:22:33.157 I/O Commands 00:22:33.157 ------------ 00:22:33.157 Flush (00h): Supported LBA-Change 00:22:33.157 Write (01h): Supported LBA-Change 00:22:33.157 Read (02h): Supported 00:22:33.157 Compare (05h): Supported 00:22:33.157 Write Zeroes (08h): Supported LBA-Change 00:22:33.157 Dataset Management (09h): Supported LBA-Change 00:22:33.157 Copy (19h): Supported LBA-Change 00:22:33.157 00:22:33.157 Error Log 00:22:33.157 
========= 00:22:33.157 00:22:33.157 Arbitration 00:22:33.157 =========== 00:22:33.157 Arbitration Burst: 1 00:22:33.157 00:22:33.157 Power Management 00:22:33.157 ================ 00:22:33.157 Number of Power States: 1 00:22:33.157 Current Power State: Power State #0 00:22:33.157 Power State #0: 00:22:33.157 Max Power: 0.00 W 00:22:33.157 Non-Operational State: Operational 00:22:33.157 Entry Latency: Not Reported 00:22:33.157 Exit Latency: Not Reported 00:22:33.157 Relative Read Throughput: 0 00:22:33.157 Relative Read Latency: 0 00:22:33.157 Relative Write Throughput: 0 00:22:33.157 Relative Write Latency: 0 00:22:33.157 Idle Power: Not Reported 00:22:33.157 Active Power: Not Reported 00:22:33.157 Non-Operational Permissive Mode: Not Supported 00:22:33.157 00:22:33.157 Health Information 00:22:33.157 ================== 00:22:33.157 Critical Warnings: 00:22:33.157 Available Spare Space: OK 00:22:33.157 Temperature: OK 00:22:33.157 Device Reliability: OK 00:22:33.157 Read Only: No 00:22:33.157 Volatile Memory Backup: OK 00:22:33.157 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:33.157 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:33.157 Available Spare: 0% 00:22:33.157 Available Spare Threshold: 0% 00:22:33.157 Life Percentage Used:[2024-12-10 11:25:39.932597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.157 [2024-12-10 11:25:39.932611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:22:33.157 [2024-12-10 11:25:39.932626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.157 [2024-12-10 11:25:39.932662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:22:33.157 [2024-12-10 11:25:39.932736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.157 [2024-12-10 11:25:39.932749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.157 [2024-12-10 11:25:39.932756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.157 [2024-12-10 11:25:39.932767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:22:33.157 [2024-12-10 11:25:39.932854] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:33.157 [2024-12-10 11:25:39.932894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.157 [2024-12-10 11:25:39.932909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.157 [2024-12-10 11:25:39.932919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:22:33.157 [2024-12-10 11:25:39.932928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.158 [2024-12-10 11:25:39.932941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:22:33.158 [2024-12-10 11:25:39.932951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.158 [2024-12-10 11:25:39.932959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 
00:22:33.158 [2024-12-10 11:25:39.932968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.158 [2024-12-10 11:25:39.932983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.932992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.932999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.158 [2024-12-10 11:25:39.933014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.158 [2024-12-10 11:25:39.933050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.158 [2024-12-10 11:25:39.933119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.158 [2024-12-10 11:25:39.933132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.158 [2024-12-10 11:25:39.933143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.158 [2024-12-10 11:25:39.933166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.158 [2024-12-10 11:25:39.933196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.158 [2024-12-10 11:25:39.933228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.158 [2024-12-10 11:25:39.933364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.158 [2024-12-10 11:25:39.933389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.158 [2024-12-10 11:25:39.933397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.158 [2024-12-10 11:25:39.933417] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:33.158 [2024-12-10 11:25:39.933426] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:33.158 [2024-12-10 11:25:39.933448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.158 [2024-12-10 11:25:39.933478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.158 [2024-12-10 11:25:39.933509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.158 [2024-12-10 11:25:39.933584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.158 [2024-12-10 
11:25:39.933602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.158 [2024-12-10 11:25:39.933609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.158 [2024-12-10 11:25:39.933635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.158 [2024-12-10 11:25:39.933663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.158 [2024-12-10 11:25:39.933690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.158 [2024-12-10 11:25:39.933755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.158 [2024-12-10 11:25:39.933771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.158 [2024-12-10 11:25:39.933779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.158 [2024-12-10 11:25:39.933803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.158 [2024-12-10 11:25:39.933831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.158 [2024-12-10 11:25:39.933862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.158 [2024-12-10 11:25:39.933915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.158 [2024-12-10 11:25:39.933931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.158 [2024-12-10 11:25:39.933938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.158 [2024-12-10 11:25:39.933963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.933978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.158 [2024-12-10 11:25:39.933996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.158 [2024-12-10 11:25:39.934024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.158 [2024-12-10 11:25:39.934091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.158 [2024-12-10 11:25:39.934103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.158 [2024-12-10 11:25:39.934110] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.934116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.158 [2024-12-10 11:25:39.934139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.934148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.934154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.158 [2024-12-10 11:25:39.934167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.158 [2024-12-10 11:25:39.934193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.158 [2024-12-10 11:25:39.934260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.158 [2024-12-10 11:25:39.934276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.158 [2024-12-10 11:25:39.934286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.934294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.158 [2024-12-10 11:25:39.934312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.934320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.934327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.158 [2024-12-10 11:25:39.934339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.158 [2024-12-10 11:25:39.938407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.158 [2024-12-10 11:25:39.938483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.158 [2024-12-10 11:25:39.938503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.158 [2024-12-10 11:25:39.938510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.158 [2024-12-10 11:25:39.938518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.158 [2024-12-10 11:25:39.938534] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:22:33.417 0% 00:22:33.417 Data Units Read: 0 00:22:33.417 Data Units Written: 0 00:22:33.417 Host Read Commands: 0 00:22:33.417 Host Write Commands: 0 00:22:33.417 Controller Busy Time: 0 minutes 00:22:33.417 Power Cycles: 0 00:22:33.417 Power On Hours: 0 hours 00:22:33.417 Unsafe Shutdowns: 0 00:22:33.417 Unrecoverable Media Errors: 0 00:22:33.417 Lifetime Error Log Entries: 0 00:22:33.417 Warning Temperature Time: 0 minutes 00:22:33.417 Critical Temperature Time: 0 minutes 00:22:33.417 00:22:33.417 Number of Queues 00:22:33.417 ================ 00:22:33.417 Number of I/O Submission Queues: 127 00:22:33.417 Number of I/O Completion Queues: 127 00:22:33.417 00:22:33.417 Active Namespaces 00:22:33.417 ================= 00:22:33.417 Namespace ID:1 00:22:33.417 Error Recovery Timeout: Unlimited 00:22:33.417 Command Set Identifier: NVM (00h) 00:22:33.417 Deallocate: Supported 00:22:33.417 
Deallocated/Unwritten Error: Not Supported 00:22:33.417 Deallocated Read Value: Unknown 00:22:33.417 Deallocate in Write Zeroes: Not Supported 00:22:33.417 Deallocated Guard Field: 0xFFFF 00:22:33.417 Flush: Supported 00:22:33.417 Reservation: Supported 00:22:33.417 Namespace Sharing Capabilities: Multiple Controllers 00:22:33.417 Size (in LBAs): 131072 (0GiB) 00:22:33.417 Capacity (in LBAs): 131072 (0GiB) 00:22:33.417 Utilization (in LBAs): 131072 (0GiB) 00:22:33.417 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:33.417 EUI64: ABCDEF0123456789 00:22:33.417 UUID: 809aed85-baa1-4ace-806c-a26cf094a095 00:22:33.417 Thin Provisioning: Not Supported 00:22:33.417 Per-NS Atomic Units: Yes 00:22:33.417 Atomic Boundary Size (Normal): 0 00:22:33.417 Atomic Boundary Size (PFail): 0 00:22:33.417 Atomic Boundary Offset: 0 00:22:33.417 Maximum Single Source Range Length: 65535 00:22:33.417 Maximum Copy Length: 65535 00:22:33.417 Maximum Source Range Count: 1 00:22:33.417 NGUID/EUI64 Never Reused: No 00:22:33.417 Namespace Write Protected: No 00:22:33.417 Number of LBA Formats: 1 00:22:33.417 Current LBA Format: LBA Format #00 00:22:33.417 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:33.417 00:22:33.417 11:25:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.417 rmmod nvme_tcp 00:22:33.417 rmmod nvme_fabrics 00:22:33.417 rmmod nvme_keyring 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 80317 ']' 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 80317 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 80317 ']' 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 80317 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.417 
11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80317 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.417 killing process with pid 80317 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80317' 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 80317 00:22:33.417 11:25:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 80317 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.794 11:25:41 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:22:34.794 00:22:34.794 real 0m4.147s 00:22:34.794 user 0m11.177s 00:22:34.794 sys 0m0.934s 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.794 ************************************ 00:22:34.794 END TEST nvmf_identify 00:22:34.794 ************************************ 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.794 ************************************ 00:22:34.794 START TEST nvmf_perf 00:22:34.794 ************************************ 00:22:34.794 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:35.055 * Looking for test storage... 00:22:35.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:35.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.055 --rc genhtml_branch_coverage=1 00:22:35.055 --rc genhtml_function_coverage=1 00:22:35.055 --rc genhtml_legend=1 00:22:35.055 --rc geninfo_all_blocks=1 00:22:35.055 --rc geninfo_unexecuted_blocks=1 00:22:35.055 00:22:35.055 ' 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:35.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.055 --rc genhtml_branch_coverage=1 00:22:35.055 --rc genhtml_function_coverage=1 00:22:35.055 --rc genhtml_legend=1 00:22:35.055 --rc geninfo_all_blocks=1 00:22:35.055 --rc geninfo_unexecuted_blocks=1 00:22:35.055 00:22:35.055 ' 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:35.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.055 --rc genhtml_branch_coverage=1 00:22:35.055 --rc genhtml_function_coverage=1 00:22:35.055 --rc genhtml_legend=1 00:22:35.055 --rc geninfo_all_blocks=1 00:22:35.055 --rc geninfo_unexecuted_blocks=1 00:22:35.055 00:22:35.055 ' 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:35.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.055 --rc genhtml_branch_coverage=1 00:22:35.055 --rc genhtml_function_coverage=1 00:22:35.055 --rc genhtml_legend=1 00:22:35.055 --rc geninfo_all_blocks=1 00:22:35.055 --rc geninfo_unexecuted_blocks=1 00:22:35.055 00:22:35.055 ' 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.056 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:35.056 Cannot find device "nvmf_init_br" 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:35.056 Cannot find device "nvmf_init_br2" 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:35.056 Cannot find device "nvmf_tgt_br" 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.056 Cannot find device "nvmf_tgt_br2" 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:35.056 Cannot find device "nvmf_init_br" 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:22:35.056 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:35.317 Cannot find device "nvmf_init_br2" 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:35.317 Cannot find device "nvmf_tgt_br" 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:35.317 Cannot find device "nvmf_tgt_br2" 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:35.317 Cannot find device "nvmf_br" 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:35.317 Cannot find device "nvmf_init_if" 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:35.317 Cannot find device "nvmf_init_if2" 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:35.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:35.318 11:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:35.318 11:25:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:35.318 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:35.577 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:35.577 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:22:35.577 00:22:35.577 --- 10.0.0.3 ping statistics --- 00:22:35.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.577 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:35.577 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:22:35.577 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:22:35.577 00:22:35.577 --- 10.0.0.4 ping statistics --- 00:22:35.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.577 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:35.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:35.577 00:22:35.577 --- 10.0.0.1 ping statistics --- 00:22:35.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.577 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:35.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:22:35.577 00:22:35.577 --- 10.0.0.2 ping statistics --- 00:22:35.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.577 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=80589 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 80589 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 80589 ']' 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.577 11:25:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:35.577 [2024-12-10 11:25:42.314580] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:22:35.577 [2024-12-10 11:25:42.314720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.836 [2024-12-10 11:25:42.493007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.836 [2024-12-10 11:25:42.626570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.836 [2024-12-10 11:25:42.626649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.836 [2024-12-10 11:25:42.626684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.836 [2024-12-10 11:25:42.626700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.836 [2024-12-10 11:25:42.626725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.836 [2024-12-10 11:25:42.628938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.836 [2024-12-10 11:25:42.629055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.836 [2024-12-10 11:25:42.629227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.836 [2024-12-10 11:25:42.629664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.094 [2024-12-10 11:25:42.838139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:36.661 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.661 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:36.661 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.661 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.661 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:36.661 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.661 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:36.661 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:37.241 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:22:37.241 11:25:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:37.499 11:25:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:22:37.499 11:25:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:37.758 11:25:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:37.758 11:25:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:22:37.758 11:25:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:37.758 11:25:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:37.758 11:25:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:38.016 [2024-12-10 11:25:44.746047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.016 11:25:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:38.582 11:25:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:38.582 11:25:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:38.582 11:25:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:38.582 11:25:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:39.148 11:25:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:39.148 [2024-12-10 11:25:45.971837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:39.407 11:25:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:22:39.665 11:25:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:39.665 11:25:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:39.666 11:25:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:39.666 11:25:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:41.043 Initializing NVMe Controllers 00:22:41.043 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:41.043 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:41.043 Initialization complete. Launching workers. 00:22:41.043 ======================================================== 00:22:41.043 Latency(us) 00:22:41.043 Device Information : IOPS MiB/s Average min max 00:22:41.043 PCIE (0000:00:10.0) NSID 1 from core 0: 22783.98 89.00 1403.67 347.32 8990.97 00:22:41.043 ======================================================== 00:22:41.043 Total : 22783.98 89.00 1403.67 347.32 8990.97 00:22:41.043 00:22:41.043 11:25:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:42.420 Initializing NVMe Controllers 00:22:42.420 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:42.420 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:42.420 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:42.420 Initialization complete. Launching workers. 
00:22:42.420 ======================================================== 00:22:42.420 Latency(us) 00:22:42.420 Device Information : IOPS MiB/s Average min max 00:22:42.420 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2700.50 10.55 369.78 137.62 7362.60 00:22:42.420 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.89 0.48 8135.99 6047.95 12012.89 00:22:42.420 ======================================================== 00:22:42.420 Total : 2824.38 11.03 710.42 137.62 12012.89 00:22:42.420 00:22:42.420 11:25:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:43.795 Initializing NVMe Controllers 00:22:43.795 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.795 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:43.795 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:43.795 Initialization complete. Launching workers. 00:22:43.795 ======================================================== 00:22:43.795 Latency(us) 00:22:43.795 Device Information : IOPS MiB/s Average min max 00:22:43.795 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6789.75 26.52 4714.82 1147.30 8389.06 00:22:43.795 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4017.30 15.69 7998.19 6054.44 9636.90 00:22:43.795 ======================================================== 00:22:43.795 Total : 10807.06 42.22 5935.34 1147.30 9636.90 00:22:43.795 00:22:43.795 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:22:43.795 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:47.083 Initializing NVMe Controllers 00:22:47.083 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.083 Controller IO queue size 128, less than required. 00:22:47.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.083 Controller IO queue size 128, less than required. 00:22:47.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.083 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:47.083 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:47.083 Initialization complete. Launching workers. 
00:22:47.083 ======================================================== 00:22:47.083 Latency(us) 00:22:47.083 Device Information : IOPS MiB/s Average min max 00:22:47.083 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1458.05 364.51 91121.33 46584.89 270896.38 00:22:47.083 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.62 144.91 232963.20 96488.94 489176.86 00:22:47.083 ======================================================== 00:22:47.083 Total : 2037.67 509.42 131468.70 46584.89 489176.86 00:22:47.083 00:22:47.083 11:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:22:47.083 Initializing NVMe Controllers 00:22:47.083 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.083 Controller IO queue size 128, less than required. 00:22:47.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.083 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:47.083 Controller IO queue size 128, less than required. 00:22:47.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.083 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:22:47.083 WARNING: Some requested NVMe devices were skipped 00:22:47.083 No valid NVMe controllers or AIO or URING devices found 00:22:47.083 11:25:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:22:50.371 Initializing NVMe Controllers 00:22:50.371 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.371 Controller IO queue size 128, less than required. 00:22:50.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.371 Controller IO queue size 128, less than required. 00:22:50.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.371 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:50.371 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:50.371 Initialization complete. Launching workers. 
00:22:50.371 00:22:50.371 ==================== 00:22:50.371 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:50.371 TCP transport: 00:22:50.371 polls: 6174 00:22:50.371 idle_polls: 3165 00:22:50.371 sock_completions: 3009 00:22:50.371 nvme_completions: 5593 00:22:50.371 submitted_requests: 8464 00:22:50.371 queued_requests: 1 00:22:50.371 00:22:50.371 ==================== 00:22:50.371 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:50.371 TCP transport: 00:22:50.371 polls: 8776 00:22:50.371 idle_polls: 5511 00:22:50.371 sock_completions: 3265 00:22:50.371 nvme_completions: 5749 00:22:50.371 submitted_requests: 8674 00:22:50.371 queued_requests: 1 00:22:50.371 ======================================================== 00:22:50.371 Latency(us) 00:22:50.371 Device Information : IOPS MiB/s Average min max 00:22:50.371 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1397.17 349.29 99792.14 49933.12 422812.28 00:22:50.371 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1436.15 359.04 90579.73 45251.18 311555.11 00:22:50.371 ======================================================== 00:22:50.371 Total : 2833.32 708.33 95122.57 45251.18 422812.28 00:22:50.371 00:22:50.371 11:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:50.371 11:25:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.630 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:22:50.630 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:22:50.630 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:22:50.888 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=edf46451-7af0-456f-a620-0f8ef5fffc51 00:22:50.888 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb edf46451-7af0-456f-a620-0f8ef5fffc51 00:22:50.888 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=edf46451-7af0-456f-a620-0f8ef5fffc51 00:22:50.888 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:22:50.888 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:22:50.888 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:22:50.888 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:51.147 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:22:51.147 { 00:22:51.147 "uuid": "edf46451-7af0-456f-a620-0f8ef5fffc51", 00:22:51.147 "name": "lvs_0", 00:22:51.147 "base_bdev": "Nvme0n1", 00:22:51.147 "total_data_clusters": 1278, 00:22:51.147 "free_clusters": 1278, 00:22:51.147 "block_size": 4096, 00:22:51.147 "cluster_size": 4194304 00:22:51.147 } 00:22:51.147 ]' 00:22:51.147 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="edf46451-7af0-456f-a620-0f8ef5fffc51") .free_clusters' 00:22:51.405 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:22:51.405 11:25:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="edf46451-7af0-456f-a620-0f8ef5fffc51") .cluster_size' 00:22:51.405 5112 00:22:51.405 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:22:51.405 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:22:51.405 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:22:51.405 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:22:51.405 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u edf46451-7af0-456f-a620-0f8ef5fffc51 lbd_0 5112 00:22:51.663 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=31d70b8e-d053-47c0-a596-9644d1d7d85f 00:22:51.663 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 31d70b8e-d053-47c0-a596-9644d1d7d85f lvs_n_0 00:22:52.229 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=cab81719-3bb3-4ca9-bcfc-3601f169d99d 00:22:52.229 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb cab81719-3bb3-4ca9-bcfc-3601f169d99d 00:22:52.229 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=cab81719-3bb3-4ca9-bcfc-3601f169d99d 00:22:52.229 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:22:52.229 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:22:52.229 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:22:52.229 11:25:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:52.487 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:22:52.487 { 00:22:52.487 "uuid": "edf46451-7af0-456f-a620-0f8ef5fffc51", 00:22:52.487 "name": "lvs_0", 00:22:52.487 "base_bdev": "Nvme0n1", 00:22:52.487 "total_data_clusters": 1278, 00:22:52.487 "free_clusters": 0, 00:22:52.487 "block_size": 4096, 00:22:52.487 "cluster_size": 4194304 00:22:52.487 }, 00:22:52.487 { 00:22:52.487 "uuid": "cab81719-3bb3-4ca9-bcfc-3601f169d99d", 00:22:52.487 "name": "lvs_n_0", 00:22:52.487 "base_bdev": "31d70b8e-d053-47c0-a596-9644d1d7d85f", 00:22:52.487 "total_data_clusters": 1276, 00:22:52.487 "free_clusters": 1276, 00:22:52.487 "block_size": 4096, 00:22:52.487 "cluster_size": 4194304 00:22:52.487 } 00:22:52.487 ]' 00:22:52.487 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="cab81719-3bb3-4ca9-bcfc-3601f169d99d") .free_clusters' 00:22:52.487 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:22:52.487 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="cab81719-3bb3-4ca9-bcfc-3601f169d99d") .cluster_size' 00:22:52.487 5104 00:22:52.487 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:22:52.487 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:22:52.487 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:22:52.487 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:22:52.487 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cab81719-3bb3-4ca9-bcfc-3601f169d99d lbd_nest_0 5104 00:22:52.745 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=f2c0156e-239c-42d3-84ee-69fdafdf536f 00:22:52.745 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.311 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:22:53.311 11:25:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f2c0156e-239c-42d3-84ee-69fdafdf536f 00:22:53.311 11:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:53.569 11:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:22:53.569 11:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:22:53.569 11:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:22:53.569 11:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:53.569 11:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:54.136 Initializing NVMe Controllers 00:22:54.136 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:54.136 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:22:54.136 WARNING: Some requested NVMe devices were skipped 00:22:54.136 No valid NVMe controllers or AIO or URING devices found 00:22:54.136 11:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:54.136 11:26:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:06.338 Initializing NVMe Controllers 00:23:06.338 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.338 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.338 Initialization complete. Launching workers. 
00:23:06.338 ======================================================== 00:23:06.338 Latency(us) 00:23:06.338 Device Information : IOPS MiB/s Average min max 00:23:06.338 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 851.10 106.39 1174.29 406.33 8079.30 00:23:06.338 ======================================================== 00:23:06.338 Total : 851.10 106.39 1174.29 406.33 8079.30 00:23:06.338 00:23:06.338 11:26:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:06.338 11:26:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:06.338 11:26:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:06.338 Initializing NVMe Controllers 00:23:06.338 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.339 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:06.339 WARNING: Some requested NVMe devices were skipped 00:23:06.339 No valid NVMe controllers or AIO or URING devices found 00:23:06.339 11:26:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:06.339 11:26:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:16.374 Initializing NVMe Controllers 00:23:16.374 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.374 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:16.374 Initialization complete. Launching workers. 
00:23:16.374 ======================================================== 00:23:16.374 Latency(us) 00:23:16.374 Device Information : IOPS MiB/s Average min max 00:23:16.374 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1325.30 165.66 24183.69 5184.13 67025.67 00:23:16.374 ======================================================== 00:23:16.374 Total : 1325.30 165.66 24183.69 5184.13 67025.67 00:23:16.374 00:23:16.374 11:26:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:16.374 11:26:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:16.374 11:26:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:16.374 Initializing NVMe Controllers 00:23:16.374 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.374 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:16.374 WARNING: Some requested NVMe devices were skipped 00:23:16.374 No valid NVMe controllers or AIO or URING devices found 00:23:16.374 11:26:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:16.374 11:26:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:26.349 Initializing NVMe Controllers 00:23:26.349 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.349 Controller IO queue size 128, less than required. 00:23:26.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:26.350 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:26.350 Initialization complete. Launching workers. 
00:23:26.350 ======================================================== 00:23:26.350 Latency(us) 00:23:26.350 Device Information : IOPS MiB/s Average min max 00:23:26.350 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3439.35 429.92 37296.92 13204.94 93935.93 00:23:26.350 ======================================================== 00:23:26.350 Total : 3439.35 429.92 37296.92 13204.94 93935.93 00:23:26.350 00:23:26.350 11:26:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.917 11:26:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f2c0156e-239c-42d3-84ee-69fdafdf536f 00:23:27.176 11:26:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:27.434 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 31d70b8e-d053-47c0-a596-9644d1d7d85f 00:23:28.000 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:28.258 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:28.258 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:28.258 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.258 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:28.258 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.258 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:28.258 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.258 11:26:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.258 rmmod nvme_tcp 00:23:28.258 rmmod nvme_fabrics 00:23:28.258 rmmod nvme_keyring 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 80589 ']' 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 80589 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 80589 ']' 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 80589 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80589 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80589' 00:23:28.258 killing process with pid 80589 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 80589 00:23:28.258 11:26:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 80589 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.787 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:23:31.046 00:23:31.046 real 0m56.037s 00:23:31.046 user 3m32.170s 00:23:31.046 sys 0m13.058s 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:31.046 ************************************ 00:23:31.046 END TEST nvmf_perf 00:23:31.046 ************************************ 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.046 ************************************ 00:23:31.046 START TEST nvmf_fio_host 00:23:31.046 ************************************ 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:31.046 * Looking for test storage... 00:23:31.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.046 --rc genhtml_branch_coverage=1 00:23:31.046 --rc genhtml_function_coverage=1 00:23:31.046 --rc genhtml_legend=1 00:23:31.046 --rc geninfo_all_blocks=1 00:23:31.046 --rc geninfo_unexecuted_blocks=1 00:23:31.046 00:23:31.046 ' 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.046 --rc genhtml_branch_coverage=1 00:23:31.046 --rc genhtml_function_coverage=1 00:23:31.046 --rc genhtml_legend=1 00:23:31.046 --rc geninfo_all_blocks=1 00:23:31.046 --rc geninfo_unexecuted_blocks=1 00:23:31.046 00:23:31.046 ' 00:23:31.046 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.046 --rc genhtml_branch_coverage=1 00:23:31.047 --rc genhtml_function_coverage=1 00:23:31.047 --rc genhtml_legend=1 00:23:31.047 --rc geninfo_all_blocks=1 00:23:31.047 --rc geninfo_unexecuted_blocks=1 00:23:31.047 00:23:31.047 ' 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:31.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.047 --rc genhtml_branch_coverage=1 00:23:31.047 --rc genhtml_function_coverage=1 00:23:31.047 --rc genhtml_legend=1 00:23:31.047 --rc geninfo_all_blocks=1 00:23:31.047 --rc geninfo_unexecuted_blocks=1 00:23:31.047 00:23:31.047 ' 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.047 11:26:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.047 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.306 11:26:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.306 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
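For reference, a condensed sketch of the veth/bridge test topology that nvmf_veth_init assembles in the trace that follows; the interface names and 10.0.0.x addresses are taken from the trace itself, and the block is an illustration of the sequence, not part of the captured output:

    # host-side initiator interfaces and in-namespace target interfaces,
    # joined through the nvmf_br bridge so 10.0.0.1/2 (initiator) can reach 10.0.0.3/4 (target)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br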
00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:31.307 Cannot find device "nvmf_init_br" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:31.307 Cannot find device "nvmf_init_br2" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:31.307 Cannot find device "nvmf_tgt_br" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:23:31.307 Cannot find device "nvmf_tgt_br2" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:31.307 Cannot find device "nvmf_init_br" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:31.307 Cannot find device "nvmf_init_br2" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:31.307 Cannot find device "nvmf_tgt_br" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:31.307 Cannot find device "nvmf_tgt_br2" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:31.307 Cannot find device "nvmf_br" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:31.307 Cannot find device "nvmf_init_if" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:31.307 Cannot find device "nvmf_init_if2" 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:23:31.307 11:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:31.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:31.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:31.307 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:31.565 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:31.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:31.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:23:31.566 00:23:31.566 --- 10.0.0.3 ping statistics --- 00:23:31.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.566 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:31.566 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:31.566 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:23:31.566 00:23:31.566 --- 10.0.0.4 ping statistics --- 00:23:31.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.566 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:31.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:23:31.566 00:23:31.566 --- 10.0.0.1 ping statistics --- 00:23:31.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.566 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:31.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:23:31.566 00:23:31.566 --- 10.0.0.2 ping statistics --- 00:23:31.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.566 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=81501 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 81501 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 81501 ']' 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.566 11:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.824 [2024-12-10 11:26:38.463805] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:31.824 [2024-12-10 11:26:38.463975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.082 [2024-12-10 11:26:38.659295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.082 [2024-12-10 11:26:38.809157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.082 [2024-12-10 11:26:38.809249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.082 [2024-12-10 11:26:38.809280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.082 [2024-12-10 11:26:38.809300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.082 [2024-12-10 11:26:38.809325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
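For reference, the target bring-up that the fio host test drives over RPC, condensed from the rpc.py calls traced below (a sketch of the sequence only, not captured output):

    # create the TCP transport, back a subsystem with a malloc bdev, and expose it on the test address
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420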
00:23:32.082 [2024-12-10 11:26:38.811721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.082 [2024-12-10 11:26:38.811805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.082 [2024-12-10 11:26:38.811867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.082 [2024-12-10 11:26:38.811880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.339 [2024-12-10 11:26:39.059610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:32.907 11:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.907 11:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:32.907 11:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:33.169 [2024-12-10 11:26:39.808082] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.169 11:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:33.169 11:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.169 11:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.169 11:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:33.435 Malloc1 00:23:33.435 11:26:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.003 11:26:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:34.003 11:26:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:34.570 [2024-12-10 11:26:41.114487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:34.570 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:34.828 11:26:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:34.828 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:34.828 fio-3.35 00:23:34.828 Starting 1 thread 00:23:37.356 00:23:37.356 test: (groupid=0, jobs=1): err= 0: pid=81582: Tue Dec 10 11:26:43 2024 00:23:37.356 read: IOPS=6887, BW=26.9MiB/s (28.2MB/s)(54.0MiB/2008msec) 00:23:37.356 slat (usec): min=2, max=195, avg= 3.21, stdev= 2.16 00:23:37.356 clat (usec): min=2008, max=17571, avg=9643.25, stdev=679.97 00:23:37.356 lat (usec): min=2037, max=17574, avg=9646.46, stdev=679.77 00:23:37.356 clat percentiles (usec): 00:23:37.356 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:23:37.356 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:23:37.356 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10552], 00:23:37.356 | 99.00th=[11076], 99.50th=[11338], 99.90th=[15401], 99.95th=[16450], 00:23:37.356 | 99.99th=[17433] 00:23:37.356 bw ( KiB/s): min=26520, max=28072, per=99.96%, avg=27538.00, stdev=699.85, samples=4 00:23:37.356 iops : min= 6630, max= 7018, avg=6884.50, stdev=174.96, samples=4 00:23:37.356 write: IOPS=6895, BW=26.9MiB/s (28.2MB/s)(54.1MiB/2008msec); 0 zone resets 00:23:37.356 slat (usec): min=2, max=123, avg= 3.32, stdev= 1.47 00:23:37.356 clat (usec): min=1372, max=16370, avg=8807.66, stdev=622.01 00:23:37.356 lat (usec): min=1384, max=16373, avg=8810.98, stdev=621.92 00:23:37.356 clat percentiles (usec): 00:23:37.356 | 1.00th=[ 7570], 5.00th=[ 8029], 10.00th=[ 8160], 20.00th=[ 8455], 00:23:37.356 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:23:37.356 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9634], 00:23:37.356 | 99.00th=[10028], 99.50th=[10421], 99.90th=[15008], 99.95th=[15533], 00:23:37.356 | 99.99th=[16188] 00:23:37.356 bw ( KiB/s): min=27328, max=27808, per=99.92%, avg=27562.00, stdev=198.27, samples=4 00:23:37.356 iops : min= 6832, max= 6952, avg=6890.50, stdev=49.57, samples=4 
00:23:37.356 lat (msec) : 2=0.01%, 4=0.10%, 10=86.74%, 20=13.16% 00:23:37.356 cpu : usr=69.81%, sys=23.47%, ctx=30, majf=0, minf=1554 00:23:37.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:37.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:37.356 issued rwts: total=13830,13847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:37.356 00:23:37.356 Run status group 0 (all jobs): 00:23:37.356 READ: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=54.0MiB (56.6MB), run=2008-2008msec 00:23:37.356 WRITE: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=54.1MiB (56.7MB), run=2008-2008msec 00:23:37.615 ----------------------------------------------------- 00:23:37.615 Suppressions used: 00:23:37.615 count bytes template 00:23:37.615 1 57 /usr/src/fio/parse.c 00:23:37.615 1 8 libtcmalloc_minimal.so 00:23:37.615 ----------------------------------------------------- 00:23:37.615 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:37.615 11:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:23:37.615 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:37.615 fio-3.35 00:23:37.615 Starting 1 thread 00:23:40.146 00:23:40.146 test: (groupid=0, jobs=1): err= 0: pid=81624: Tue Dec 10 11:26:46 2024 00:23:40.146 read: IOPS=6581, BW=103MiB/s (108MB/s)(206MiB/2007msec) 00:23:40.146 slat (usec): min=3, max=152, avg= 4.94, stdev= 2.43 00:23:40.146 clat (usec): min=2131, max=22269, avg=11059.46, stdev=3322.47 00:23:40.146 lat (usec): min=2135, max=22273, avg=11064.41, stdev=3322.52 00:23:40.146 clat percentiles (usec): 00:23:40.146 | 1.00th=[ 5407], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 8160], 00:23:40.146 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11469], 00:23:40.146 | 70.00th=[12649], 80.00th=[13829], 90.00th=[15401], 95.00th=[17433], 00:23:40.146 | 99.00th=[19792], 99.50th=[20317], 99.90th=[21365], 99.95th=[22152], 00:23:40.146 | 99.99th=[22152] 00:23:40.146 bw ( KiB/s): min=49600, max=56960, per=50.00%, avg=52656.00, stdev=3098.83, samples=4 00:23:40.146 iops : min= 3100, max= 3560, avg=3291.00, stdev=193.68, samples=4 00:23:40.146 write: IOPS=3719, BW=58.1MiB/s (60.9MB/s)(108MiB/1858msec); 0 zone resets 00:23:40.146 slat (usec): min=34, max=248, avg=42.75, stdev= 7.37 00:23:40.146 clat (usec): min=4913, max=24387, avg=15088.47, stdev=2720.40 00:23:40.146 lat (usec): min=4951, max=24426, avg=15131.22, stdev=2721.48 00:23:40.146 clat percentiles (usec): 00:23:40.146 | 1.00th=[10028], 5.00th=[11469], 10.00th=[11994], 20.00th=[12780], 00:23:40.146 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14615], 60.00th=[15401], 00:23:40.146 | 70.00th=[16319], 80.00th=[17433], 90.00th=[19006], 95.00th=[20317], 00:23:40.146 | 99.00th=[22152], 99.50th=[22938], 99.90th=[23987], 99.95th=[23987], 00:23:40.146 | 99.99th=[24511] 00:23:40.146 bw ( KiB/s): min=52640, max=58912, per=92.11%, avg=54816.00, stdev=2834.00, samples=4 00:23:40.146 iops : min= 3290, max= 3682, avg=3426.00, stdev=177.13, samples=4 00:23:40.146 lat (msec) : 4=0.18%, 10=27.27%, 20=70.08%, 50=2.47% 00:23:40.146 cpu : usr=81.71%, sys=13.85%, ctx=15, majf=0, minf=2156 00:23:40.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:23:40.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:40.146 issued rwts: total=13209,6911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:40.146 00:23:40.146 Run status group 0 (all jobs): 00:23:40.147 READ: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=206MiB (216MB), run=2007-2007msec 00:23:40.147 WRITE: bw=58.1MiB/s (60.9MB/s), 58.1MiB/s-58.1MiB/s (60.9MB/s-60.9MB/s), io=108MiB (113MB), run=1858-1858msec 00:23:40.405 ----------------------------------------------------- 00:23:40.405 Suppressions used: 00:23:40.405 count bytes template 00:23:40.405 1 57 /usr/src/fio/parse.c 00:23:40.405 347 33312 /usr/src/fio/iolog.c 00:23:40.405 1 8 libtcmalloc_minimal.so 00:23:40.405 ----------------------------------------------------- 00:23:40.405 00:23:40.405 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:40.663 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:23:41.228 Nvme0n1 00:23:41.228 11:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:23:41.511 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d1c56f69-1c6e-4239-b24f-78449387a57f 00:23:41.511 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d1c56f69-1c6e-4239-b24f-78449387a57f 00:23:41.511 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d1c56f69-1c6e-4239-b24f-78449387a57f 00:23:41.511 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:23:41.511 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:23:41.511 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:23:41.511 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:41.781 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:23:41.781 { 00:23:41.781 "uuid": "d1c56f69-1c6e-4239-b24f-78449387a57f", 00:23:41.781 "name": "lvs_0", 00:23:41.781 "base_bdev": "Nvme0n1", 00:23:41.781 "total_data_clusters": 4, 00:23:41.781 "free_clusters": 4, 00:23:41.781 "block_size": 4096, 00:23:41.781 "cluster_size": 1073741824 00:23:41.781 } 00:23:41.781 ]' 00:23:41.781 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d1c56f69-1c6e-4239-b24f-78449387a57f") .free_clusters' 00:23:41.781 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:23:41.781 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d1c56f69-1c6e-4239-b24f-78449387a57f") .cluster_size' 00:23:41.782 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:23:41.782 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:23:41.782 4096 00:23:41.782 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1378 -- # echo 4096 00:23:41.782 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:23:42.040 a730f39d-ff08-4bbd-917d-3f26e85f4b70 00:23:42.040 11:26:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:23:42.606 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:23:42.606 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:43.173 11:26:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:43.173 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:43.173 fio-3.35 00:23:43.173 Starting 1 thread 
00:23:45.703 00:23:45.703 test: (groupid=0, jobs=1): err= 0: pid=81727: Tue Dec 10 11:26:52 2024 00:23:45.703 read: IOPS=5016, BW=19.6MiB/s (20.5MB/s)(39.4MiB/2011msec) 00:23:45.703 slat (usec): min=2, max=198, avg= 3.68, stdev= 2.90 00:23:45.703 clat (usec): min=3390, max=22068, avg=13278.55, stdev=1132.49 00:23:45.703 lat (usec): min=3396, max=22071, avg=13282.23, stdev=1132.25 00:23:45.703 clat percentiles (usec): 00:23:45.703 | 1.00th=[10945], 5.00th=[11731], 10.00th=[11994], 20.00th=[12387], 00:23:45.703 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13304], 60.00th=[13566], 00:23:45.703 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14615], 95.00th=[15008], 00:23:45.703 | 99.00th=[15926], 99.50th=[16581], 99.90th=[20317], 99.95th=[21627], 00:23:45.703 | 99.99th=[21890] 00:23:45.703 bw ( KiB/s): min=19160, max=20392, per=99.91%, avg=20050.00, stdev=594.28, samples=4 00:23:45.703 iops : min= 4790, max= 5098, avg=5012.50, stdev=148.57, samples=4 00:23:45.703 write: IOPS=5011, BW=19.6MiB/s (20.5MB/s)(39.4MiB/2011msec); 0 zone resets 00:23:45.703 slat (usec): min=2, max=191, avg= 3.87, stdev= 2.75 00:23:45.703 clat (usec): min=2172, max=23504, avg=12076.52, stdev=1079.65 00:23:45.703 lat (usec): min=2183, max=23507, avg=12080.39, stdev=1079.61 00:23:45.703 clat percentiles (usec): 00:23:45.703 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[11207], 00:23:45.703 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:23:45.703 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:23:45.703 | 99.00th=[14484], 99.50th=[15139], 99.90th=[19792], 99.95th=[21365], 00:23:45.703 | 99.99th=[21890] 00:23:45.703 bw ( KiB/s): min=19864, max=20312, per=99.92%, avg=20030.00, stdev=202.53, samples=4 00:23:45.703 iops : min= 4966, max= 5078, avg=5007.50, stdev=50.63, samples=4 00:23:45.703 lat (msec) : 4=0.05%, 10=0.68%, 20=99.15%, 50=0.11% 00:23:45.703 cpu : usr=73.88%, sys=20.05%, ctx=3, majf=0, minf=1554 00:23:45.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:45.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:45.703 issued rwts: total=10089,10078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:45.703 00:23:45.703 Run status group 0 (all jobs): 00:23:45.703 READ: bw=19.6MiB/s (20.5MB/s), 19.6MiB/s-19.6MiB/s (20.5MB/s-20.5MB/s), io=39.4MiB (41.3MB), run=2011-2011msec 00:23:45.703 WRITE: bw=19.6MiB/s (20.5MB/s), 19.6MiB/s-19.6MiB/s (20.5MB/s-20.5MB/s), io=39.4MiB (41.3MB), run=2011-2011msec 00:23:45.962 ----------------------------------------------------- 00:23:45.962 Suppressions used: 00:23:45.962 count bytes template 00:23:45.962 1 58 /usr/src/fio/parse.c 00:23:45.962 1 8 libtcmalloc_minimal.so 00:23:45.962 ----------------------------------------------------- 00:23:45.962 00:23:45.962 11:26:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:46.221 11:26:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:23:46.480 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=14391e04-d7f1-4579-9300-933051e87db4 00:23:46.480 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # 
get_lvs_free_mb 14391e04-d7f1-4579-9300-933051e87db4 00:23:46.480 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=14391e04-d7f1-4579-9300-933051e87db4 00:23:46.480 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:23:46.480 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:23:46.480 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:23:46.480 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:46.739 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:23:46.739 { 00:23:46.739 "uuid": "d1c56f69-1c6e-4239-b24f-78449387a57f", 00:23:46.739 "name": "lvs_0", 00:23:46.739 "base_bdev": "Nvme0n1", 00:23:46.739 "total_data_clusters": 4, 00:23:46.739 "free_clusters": 0, 00:23:46.739 "block_size": 4096, 00:23:46.739 "cluster_size": 1073741824 00:23:46.739 }, 00:23:46.739 { 00:23:46.739 "uuid": "14391e04-d7f1-4579-9300-933051e87db4", 00:23:46.739 "name": "lvs_n_0", 00:23:46.739 "base_bdev": "a730f39d-ff08-4bbd-917d-3f26e85f4b70", 00:23:46.739 "total_data_clusters": 1022, 00:23:46.739 "free_clusters": 1022, 00:23:46.739 "block_size": 4096, 00:23:46.739 "cluster_size": 4194304 00:23:46.739 } 00:23:46.739 ]' 00:23:46.739 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="14391e04-d7f1-4579-9300-933051e87db4") .free_clusters' 00:23:46.739 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:23:46.739 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="14391e04-d7f1-4579-9300-933051e87db4") .cluster_size' 00:23:46.997 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:23:46.997 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:23:46.997 4088 00:23:46.997 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:23:46.997 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:23:47.256 957a62a8-6d2c-4c60-a57f-1483965f599d 00:23:47.256 11:26:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:23:47.514 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:23:47.773 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 
00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:48.032 11:26:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:48.291 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:48.291 fio-3.35 00:23:48.291 Starting 1 thread 00:23:50.836 00:23:50.836 test: (groupid=0, jobs=1): err= 0: pid=81804: Tue Dec 10 11:26:57 2024 00:23:50.836 read: IOPS=4476, BW=17.5MiB/s (18.3MB/s)(35.1MiB/2009msec) 00:23:50.836 slat (usec): min=2, max=403, avg= 3.62, stdev= 5.44 00:23:50.836 clat (usec): min=4429, max=26492, avg=14949.39, stdev=1349.23 00:23:50.836 lat (usec): min=4448, max=26495, avg=14953.01, stdev=1348.77 00:23:50.836 clat percentiles (usec): 00:23:50.836 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13435], 20.00th=[13960], 00:23:50.836 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:23:50.836 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:23:50.836 | 99.00th=[18220], 99.50th=[19268], 99.90th=[25035], 99.95th=[25297], 00:23:50.836 | 99.99th=[26608] 00:23:50.836 bw ( KiB/s): min=17048, max=18168, per=99.60%, avg=17836.00, stdev=527.62, samples=4 00:23:50.836 iops : min= 4262, max= 4542, avg=4459.00, stdev=131.90, samples=4 00:23:50.836 write: IOPS=4470, BW=17.5MiB/s (18.3MB/s)(35.1MiB/2009msec); 0 zone resets 00:23:50.836 slat (usec): min=2, max=171, avg= 3.73, stdev= 2.42 00:23:50.836 clat (usec): min=2853, max=22992, avg=13504.33, stdev=1199.07 00:23:50.836 lat (usec): min=2868, max=22995, avg=13508.07, stdev=1198.73 00:23:50.836 clat percentiles (usec): 00:23:50.836 | 1.00th=[11076], 5.00th=[11863], 10.00th=[12125], 
20.00th=[12649], 00:23:50.836 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:23:50.836 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:23:50.836 | 99.00th=[16450], 99.50th=[16909], 99.90th=[19006], 99.95th=[22414], 00:23:50.836 | 99.99th=[22938] 00:23:50.836 bw ( KiB/s): min=17728, max=18008, per=99.88%, avg=17862.00, stdev=114.52, samples=4 00:23:50.836 iops : min= 4432, max= 4502, avg=4465.50, stdev=28.63, samples=4 00:23:50.836 lat (msec) : 4=0.02%, 10=0.34%, 20=99.42%, 50=0.23% 00:23:50.836 cpu : usr=74.90%, sys=19.97%, ctx=4, majf=0, minf=1554 00:23:50.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:50.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:50.836 issued rwts: total=8994,8982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:50.836 00:23:50.836 Run status group 0 (all jobs): 00:23:50.836 READ: bw=17.5MiB/s (18.3MB/s), 17.5MiB/s-17.5MiB/s (18.3MB/s-18.3MB/s), io=35.1MiB (36.8MB), run=2009-2009msec 00:23:50.836 WRITE: bw=17.5MiB/s (18.3MB/s), 17.5MiB/s-17.5MiB/s (18.3MB/s-18.3MB/s), io=35.1MiB (36.8MB), run=2009-2009msec 00:23:50.836 ----------------------------------------------------- 00:23:50.836 Suppressions used: 00:23:50.836 count bytes template 00:23:50.836 1 58 /usr/src/fio/parse.c 00:23:50.836 1 8 libtcmalloc_minimal.so 00:23:50.836 ----------------------------------------------------- 00:23:50.836 00:23:50.836 11:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:51.404 11:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:23:51.404 11:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:23:51.662 11:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:51.921 11:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:23:52.203 11:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:52.462 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.029 rmmod 
nvme_tcp 00:23:53.029 rmmod nvme_fabrics 00:23:53.029 rmmod nvme_keyring 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 81501 ']' 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 81501 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 81501 ']' 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 81501 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81501 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.029 killing process with pid 81501 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81501' 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 81501 00:23:53.029 11:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 81501 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:54.405 11:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.405 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:23:54.664 00:23:54.664 real 0m23.573s 00:23:54.664 user 1m41.385s 00:23:54.664 sys 0m4.965s 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.664 ************************************ 00:23:54.664 END TEST nvmf_fio_host 00:23:54.664 ************************************ 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.664 ************************************ 00:23:54.664 START TEST nvmf_failover 00:23:54.664 ************************************ 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:54.664 * Looking for test storage... 
00:23:54.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:54.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.664 --rc genhtml_branch_coverage=1 00:23:54.664 --rc genhtml_function_coverage=1 00:23:54.664 --rc genhtml_legend=1 00:23:54.664 --rc geninfo_all_blocks=1 00:23:54.664 --rc geninfo_unexecuted_blocks=1 00:23:54.664 00:23:54.664 ' 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:54.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.664 --rc genhtml_branch_coverage=1 00:23:54.664 --rc genhtml_function_coverage=1 00:23:54.664 --rc genhtml_legend=1 00:23:54.664 --rc geninfo_all_blocks=1 00:23:54.664 --rc geninfo_unexecuted_blocks=1 00:23:54.664 00:23:54.664 ' 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:54.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.664 --rc genhtml_branch_coverage=1 00:23:54.664 --rc genhtml_function_coverage=1 00:23:54.664 --rc genhtml_legend=1 00:23:54.664 --rc geninfo_all_blocks=1 00:23:54.664 --rc geninfo_unexecuted_blocks=1 00:23:54.664 00:23:54.664 ' 00:23:54.664 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:54.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.664 --rc genhtml_branch_coverage=1 00:23:54.664 --rc genhtml_function_coverage=1 00:23:54.664 --rc genhtml_legend=1 00:23:54.664 --rc geninfo_all_blocks=1 00:23:54.664 --rc geninfo_unexecuted_blocks=1 00:23:54.664 00:23:54.664 ' 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.924 
11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.924 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:54.924 Cannot find device "nvmf_init_br" 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:54.924 Cannot find device "nvmf_init_br2" 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:23:54.924 Cannot find device "nvmf_tgt_br" 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:54.924 Cannot find device "nvmf_tgt_br2" 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:54.924 Cannot find device "nvmf_init_br" 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:23:54.924 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:54.924 Cannot find device "nvmf_init_br2" 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:54.925 Cannot find device "nvmf_tgt_br" 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:54.925 Cannot find device "nvmf_tgt_br2" 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:54.925 Cannot find device "nvmf_br" 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:54.925 Cannot find device "nvmf_init_if" 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:54.925 Cannot find device "nvmf_init_if2" 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:54.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:54.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:54.925 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:54.925 
11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:55.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:55.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:23:55.184 00:23:55.184 --- 10.0.0.3 ping statistics --- 00:23:55.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.184 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:55.184 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:55.184 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:23:55.184 00:23:55.184 --- 10.0.0.4 ping statistics --- 00:23:55.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.184 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:55.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:23:55.184 00:23:55.184 --- 10.0.0.1 ping statistics --- 00:23:55.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.184 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:55.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:23:55.184 00:23:55.184 --- 10.0.0.2 ping statistics --- 00:23:55.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.184 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=82111 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 82111 00:23:55.184 11:27:01 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 82111 ']' 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.184 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:55.443 [2024-12-10 11:27:02.077493] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:23:55.443 [2024-12-10 11:27:02.077677] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.701 [2024-12-10 11:27:02.272767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:55.701 [2024-12-10 11:27:02.405771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.701 [2024-12-10 11:27:02.405878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.701 [2024-12-10 11:27:02.405901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.701 [2024-12-10 11:27:02.405916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.701 [2024-12-10 11:27:02.405934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
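The nvmf_veth_init trace above builds the test topology: two initiator veth pairs on the host (nvmf_init_if/nvmf_init_br at 10.0.0.1 and nvmf_init_if2/nvmf_init_br2 at 10.0.0.2), two target veth pairs whose nvmf_tgt_if and nvmf_tgt_if2 ends are moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.3 and 10.0.0.4, and a bridge nvmf_br that joins the four *_br peers. iptables rules then admit NVMe/TCP traffic on port 4420 at the initiator interfaces and allow bridge-local forwarding, and the four pings confirm host-to-namespace reachability in both directions before the target is started. A minimal sketch of the equivalent manual setup, condensed from the ip/iptables commands in the trace (the initial teardown of stale devices and the SPDK_NVMF comment tags that the ipts helper appends for later cleanup are omitted, and the loops are shorthand for the individual commands the script runs):

  # target namespace plus four veth pairs (initiator side and target side)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace and address both sides
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side peers together
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # admit NVMe/TCP on the initiator interfaces and allow intra-bridge forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # reachability check in both directions before launching the target
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With the plumbing verified, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 82111) and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers, which is what the DPDK EAL and reactor start-up notices that follow report.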
00:23:55.701 [2024-12-10 11:27:02.408289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.701 [2024-12-10 11:27:02.408423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.701 [2024-12-10 11:27:02.408459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.959 [2024-12-10 11:27:02.634574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:56.525 11:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.525 11:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:56.525 11:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.525 11:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.525 11:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:56.525 11:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.525 11:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:56.783 [2024-12-10 11:27:03.438121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.783 11:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:57.042 Malloc0 00:23:57.042 11:27:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.316 11:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.575 11:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:57.833 [2024-12-10 11:27:04.584013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:57.833 11:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:58.091 [2024-12-10 11:27:04.888178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:58.091 11:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:23:58.658 [2024-12-10 11:27:05.200696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=82174 00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
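Once the target's reactors are up, failover.sh provisions it over the default RPC socket /var/tmp/spdk.sock: a TCP transport, a RAM-backed Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and three listeners on 10.0.0.3 ports 4420, 4421 and 4422 so the host side has several addresses to fail over between. Condensed from the rpc.py calls in the trace above (paths as used in this run; the port loop is shorthand for the three individual add_listener calls):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                 # transport options exactly as passed by the test
  $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done

bdevperf (pid 82174) is then started in the background with its own RPC socket, -z -r /var/tmp/bdevperf.sock, and a 128-deep, 4 KiB verify workload scheduled for 15 seconds (-q 128 -o 4096 -w verify -t 15 -f), and the trap is re-armed so try.txt is dumped and both processes are cleaned up if anything fails from this point on.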
00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 82174 /var/tmp/bdevperf.sock 00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 82174 ']' 00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.658 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.592 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.592 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:59.592 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:00.158 NVMe0n1 00:24:00.158 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:00.416 00:24:00.416 11:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=82198 00:24:00.416 11:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.416 11:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:01.349 11:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:01.607 11:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:04.890 11:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:05.149 00:24:05.149 11:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:05.407 11:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:08.689 11:27:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:08.689 [2024-12-10 11:27:15.474864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:08.689 11:27:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:10.060 11:27:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:24:10.060 11:27:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 82198 00:24:16.663 { 00:24:16.663 "results": [ 00:24:16.663 { 00:24:16.663 "job": "NVMe0n1", 00:24:16.663 "core_mask": "0x1", 00:24:16.664 "workload": "verify", 00:24:16.664 "status": "finished", 00:24:16.664 "verify_range": { 00:24:16.664 "start": 0, 00:24:16.664 "length": 16384 00:24:16.664 }, 00:24:16.664 "queue_depth": 128, 00:24:16.664 "io_size": 4096, 00:24:16.664 "runtime": 15.020438, 00:24:16.664 "iops": 6746.074914726189, 00:24:16.664 "mibps": 26.351855135649174, 00:24:16.664 "io_failed": 3365, 00:24:16.664 "io_timeout": 0, 00:24:16.664 "avg_latency_us": 18324.105472120482, 00:24:16.664 "min_latency_us": 860.16, 00:24:16.664 "max_latency_us": 20137.425454545453 00:24:16.664 } 00:24:16.664 ], 00:24:16.664 "core_count": 1 00:24:16.664 } 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 82174 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 82174 ']' 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 82174 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82174 00:24:16.664 killing process with pid 82174 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82174' 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 82174 00:24:16.664 11:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 82174 00:24:16.664 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:16.664 [2024-12-10 11:27:05.322659] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:24:16.664 [2024-12-10 11:27:05.322834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82174 ] 00:24:16.664 [2024-12-10 11:27:05.498590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.664 [2024-12-10 11:27:05.625758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.664 [2024-12-10 11:27:05.828282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:16.664 Running I/O for 15 seconds... 
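The summary JSON above is bdevperf's verdict on the whole run: roughly 6746 IOPS of 4 KiB verify I/O over the 15 second window, with 3365 I/Os recorded in the io_failed counter across the path changes. The try.txt dump that follows replays the host-side trace, and the wall of ABORTED - SQ DELETION completions beginning at 11:27:08 is the expected signature of the first path drop: when failover.sh removes the 4420 listener, the connection on that path goes away, the initiator-side NVMe driver aborts the commands still queued on that qpair with SQ DELETION status (the "aborting queued i/o" and "Command completed manually" notices below), and bdev_nvme fails the path over to the alternate listener, which the "Start failover from 10.0.0.3:4420 to 10.0.0.3:4421" notice at the end of this excerpt records. The path shuffle that produces all of this, condensed from the rpc.py calls in the trace (sleeps kept so each change lands while I/O is in flight; $RPC is the shorthand defined above, and $BPERF_RPC is a shorthand for "$RPC -s /var/tmp/bdevperf.sock" used here for readability):

  BPERF_RPC="$RPC -s /var/tmp/bdevperf.sock"
  # register 4420 as the active path and 4421 as an alternate on the same controller
  $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # start the 15 s verify run, then change the listener set underneath it
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # first path drop, 4420 is gone
  sleep 3
  $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421   # second drop, only 4422 still listens
  sleep 3
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420      # bring the original path back
  sleep 1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422   # final path change of the run
  wait   # bdevperf.py returns with the JSON summary once the 15 s run completes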
00:24:16.664 5397.00 IOPS, 21.08 MiB/s [2024-12-10T11:27:23.490Z] [2024-12-10 11:27:08.405765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.405866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.405914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.405949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.405975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.405999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:16.664 [2024-12-10 11:27:08.406376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406882] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.406955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.406977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.407000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.407033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.407060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.407082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.407105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.407127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.407158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.407180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.407203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.407225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.407253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.407279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.664 [2024-12-10 11:27:08.407303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.664 [2024-12-10 11:27:08.407326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407388] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52312 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.407958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.407981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.408024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.408072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.408117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.408166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.408232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 
[2024-12-10 11:27:08.408465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.408975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.408999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.409022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.409045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.409068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.665 [2024-12-10 11:27:08.409092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.409123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.409147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.409171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.409195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.409218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.409241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.409263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.409288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.409311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.409334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.665 [2024-12-10 11:27:08.409374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.665 [2024-12-10 11:27:08.409400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.666 [2024-12-10 11:27:08.409495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.409964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.409990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 
[2024-12-10 11:27:08.410453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.410952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.410975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.411006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.411030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.411055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.411080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.411119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.411145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.411168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.411190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.411228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.411253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.411276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.411300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.411323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.411360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.411389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.666 [2024-12-10 11:27:08.411425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.666 [2024-12-10 11:27:08.411450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:47 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.411975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51936 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.411997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.412043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.412088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.412142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.412188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:08.412233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:24:16.667 [2024-12-10 11:27:08.412287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.667 [2024-12-10 11:27:08.412307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.667 [2024-12-10 11:27:08.412325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51984 len:8 PRP1 0x0 PRP2 0x0 00:24:16.667 [2024-12-10 11:27:08.412344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412645] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:24:16.667 [2024-12-10 11:27:08.412738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.667 [2024-12-10 11:27:08.412770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.667 [2024-12-10 11:27:08.412813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.667 [2024-12-10 11:27:08.412857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.667 [2024-12-10 11:27:08.412901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:08.412927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:16.667 [2024-12-10 11:27:08.413013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:16.667 [2024-12-10 11:27:08.417281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:16.667 [2024-12-10 11:27:08.459080] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:16.667 5826.00 IOPS, 22.76 MiB/s [2024-12-10T11:27:23.493Z] 6209.33 IOPS, 24.26 MiB/s [2024-12-10T11:27:23.493Z] 6417.00 IOPS, 25.07 MiB/s [2024-12-10T11:27:23.493Z] [2024-12-10 11:27:12.117991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.667 [2024-12-10 11:27:12.118096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.667 [2024-12-10 11:27:12.118190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.667 [2024-12-10 11:27:12.118232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.667 [2024-12-10 11:27:12.118271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.667 [2024-12-10 11:27:12.118310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.667 [2024-12-10 11:27:12.118349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.667 [2024-12-10 11:27:12.118412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.667 [2024-12-10 11:27:12.118452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:12.118492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:12.118539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:12.118593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:12.118630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:12.118668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:12.118716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.667 [2024-12-10 11:27:12.118756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.667 [2024-12-10 11:27:12.118794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.118813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:16.668 [2024-12-10 11:27:12.118834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.118853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.118874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.118894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.118915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.118934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.118955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.118973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.118994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.668 [2024-12-10 11:27:12.119910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.119972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.119991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.120012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.120031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.120053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.120071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.120092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.120112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.120133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.120163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.120184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:16.668 [2024-12-10 11:27:12.120203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.120225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.120244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.668 [2024-12-10 11:27:12.120266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.668 [2024-12-10 11:27:12.120287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.120327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.120383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.120436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.120478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.120518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.120558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.120599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.120640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.120680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.120720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.120760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.120814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.120851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.120888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.120927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.120974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.120994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121051] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.121642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.121683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.121724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.121764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.121805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.121846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.121886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.121925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 
11:27:12.121947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.669 [2024-12-10 11:27:12.121966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.121987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.122006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.669 [2024-12-10 11:27:12.122026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.669 [2024-12-10 11:27:12.122053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.122673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-12-10 11:27:12.122714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-12-10 11:27:12.122755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-12-10 11:27:12.122795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:101 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-12-10 11:27:12.122836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-12-10 11:27:12.122877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-12-10 11:27:12.122917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-12-10 11:27:12.122957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.122979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-12-10 11:27:12.122999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.123040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.123081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.123122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.123173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.123213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.123254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.670 [2024-12-10 11:27:12.123294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:24:16.670 [2024-12-10 11:27:12.123339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.670 [2024-12-10 11:27:12.123371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.670 [2024-12-10 11:27:12.123390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4296 len:8 PRP1 0x0 PRP2 0x0 00:24:16.670 [2024-12-10 11:27:12.123409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.670 [2024-12-10 11:27:12.123447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.670 [2024-12-10 11:27:12.123462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4688 len:8 PRP1 0x0 PRP2 0x0 00:24:16.670 [2024-12-10 11:27:12.123481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.670 [2024-12-10 11:27:12.123514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.670 [2024-12-10 11:27:12.123529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4696 len:8 PRP1 0x0 PRP2 0x0 00:24:16.670 [2024-12-10 11:27:12.123547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.670 [2024-12-10 11:27:12.123580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.670 [2024-12-10 11:27:12.123595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:8 PRP1 0x0 PRP2 0x0 00:24:16.670 [2024-12-10 11:27:12.123614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.670 [2024-12-10 11:27:12.123647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.670 [2024-12-10 11:27:12.123662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4712 len:8 PRP1 0x0 PRP2 0x0 00:24:16.670 [2024-12-10 11:27:12.123680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.670 [2024-12-10 11:27:12.123737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.670 [2024-12-10 11:27:12.123752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4720 len:8 PRP1 0x0 PRP2 0x0 00:24:16.670 [2024-12-10 11:27:12.123771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.670 [2024-12-10 11:27:12.123789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.670 [2024-12-10 11:27:12.123804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.670 [2024-12-10 11:27:12.123819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4728 len:8 PRP1 0x0 PRP2 0x0 00:24:16.671 [2024-12-10 11:27:12.123837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:12.123855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.671 [2024-12-10 11:27:12.123870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.671 [2024-12-10 11:27:12.123885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:8 PRP1 0x0 PRP2 0x0 00:24:16.671 [2024-12-10 11:27:12.123904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:12.123922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.671 [2024-12-10 11:27:12.123936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.671 [2024-12-10 11:27:12.123951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4744 len:8 PRP1 0x0 PRP2 0x0 00:24:16.671 [2024-12-10 11:27:12.123969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:12.124233] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:24:16.671 [2024-12-10 11:27:12.124310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.671 [2024-12-10 11:27:12.124340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:12.124381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.671 [2024-12-10 11:27:12.124402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:12.124422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.671 [2024-12-10 11:27:12.124441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:12.124461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.671 [2024-12-10 11:27:12.124479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:12.124498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:16.671 [2024-12-10 11:27:12.124554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:16.671 [2024-12-10 11:27:12.128675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:16.671 [2024-12-10 11:27:12.166009] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:24:16.671 6488.40 IOPS, 25.35 MiB/s [2024-12-10T11:27:23.497Z] 6589.67 IOPS, 25.74 MiB/s [2024-12-10T11:27:23.497Z] 6664.29 IOPS, 26.03 MiB/s [2024-12-10T11:27:23.497Z] 6727.25 IOPS, 26.28 MiB/s [2024-12-10T11:27:23.497Z] 6778.89 IOPS, 26.48 MiB/s [2024-12-10T11:27:23.497Z] [2024-12-10 11:27:16.794289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.671 [2024-12-10 11:27:16.794386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.794427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.671 [2024-12-10 11:27:16.794448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.794468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.671 [2024-12-10 11:27:16.794486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.794506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.671 [2024-12-10 11:27:16.794525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.794543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:24:16.671 [2024-12-10 11:27:16.795502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.795577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:16.671 [2024-12-10 11:27:16.795622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.795664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.795721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.795762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.795803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.795845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.795911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.795953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.795973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.795994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.796033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.796076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.796117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.796158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.796199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-12-10 11:27:16.796240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.671 [2024-12-10 11:27:16.796283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.671 [2024-12-10 11:27:16.796325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.671 [2024-12-10 11:27:16.796386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.671 [2024-12-10 11:27:16.796430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.671 [2024-12-10 11:27:16.796489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.671 [2024-12-10 11:27:16.796531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.671 [2024-12-10 11:27:16.796572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.671 [2024-12-10 11:27:16.796614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.671 [2024-12-10 11:27:16.796655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.671 [2024-12-10 11:27:16.796676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.796696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.796718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.796738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.796759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.796778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.796800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.796819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.796840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.796860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.796882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.796901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.796922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.796942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.796963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15488 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.672 [2024-12-10 11:27:16.797006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.672 [2024-12-10 11:27:16.797050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.672 [2024-12-10 11:27:16.797091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.672 [2024-12-10 11:27:16.797132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.672 [2024-12-10 11:27:16.797174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.672 [2024-12-10 11:27:16.797216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.672 [2024-12-10 11:27:16.797257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.672 [2024-12-10 11:27:16.797297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 
[2024-12-10 11:27:16.797441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.797972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.672 [2024-12-10 11:27:16.797992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.672 [2024-12-10 11:27:16.798013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.798382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.798424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.798464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.798505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.798547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.798587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.798640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.798683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.798965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.798985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.799026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 
[2024-12-10 11:27:16.799185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.673 [2024-12-10 11:27:16.799746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.673 [2024-12-10 11:27:16.799789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.673 [2024-12-10 11:27:16.799810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.799830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.799851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.799871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.799893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.799912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.799934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.799954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.799975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.799994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.800034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.800075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.800115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.800155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.800196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.800242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.800290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.800332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.674 [2024-12-10 11:27:16.800388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:24:16.674 [2024-12-10 11:27:16.800433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.800449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.800466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15352 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.800485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.800520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.800535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.800553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.800587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.800602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15752 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.800620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.800652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.800667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15760 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.800686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.800718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.800733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15768 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.800751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.800783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.800798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.800825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.800859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.800874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15784 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.800893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 
11:27:16.800925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.800940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15792 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.800958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.800976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.800990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.801005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15800 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.801023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.801041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.801055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.801071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.801090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.801108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.801122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.801137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15816 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.801155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.801174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.801188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.801203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15824 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.801221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.801239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.801254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.801269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15832 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.801287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.801305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.801319] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.801373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.801395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.801415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.801431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.801446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15848 len:8 PRP1 0x0 PRP2 0x0 00:24:16.674 [2024-12-10 11:27:16.801464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.674 [2024-12-10 11:27:16.801482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.674 [2024-12-10 11:27:16.801497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.674 [2024-12-10 11:27:16.801512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15856 len:8 PRP1 0x0 PRP2 0x0 00:24:16.675 [2024-12-10 11:27:16.801530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.675 [2024-12-10 11:27:16.801548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.675 [2024-12-10 11:27:16.801563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.675 [2024-12-10 11:27:16.801577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15864 len:8 PRP1 0x0 PRP2 0x0 00:24:16.675 [2024-12-10 11:27:16.801596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.675 [2024-12-10 11:27:16.801860] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:24:16.675 [2024-12-10 11:27:16.801896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:16.675 [2024-12-10 11:27:16.806035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:16.675 [2024-12-10 11:27:16.806093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:16.675 [2024-12-10 11:27:16.842860] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
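Editor's note: the burst of NOTICE/ERROR entries above records a single path failover as seen by the initiator — queued READ/WRITE commands are completed manually as ABORTED - SQ DELETION, bdev_nvme_failover_trid switches the path from 10.0.0.3:4421 to 10.0.0.3:4422 and then from 4422 back to 4420, and the subsequent controller reset succeeds. The script then counts these resets (the grep -c 'Resetting controller successful' trace just below, which yields count=3). A minimal sketch of that check, assuming the first bdevperf run's try.txt is the file being searched (the input file is not visible in the trace), is:

    # hypothetical reconstruction of the check traced at host/failover.sh@65-67
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi
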
00:24:16.675 6782.50 IOPS, 26.49 MiB/s [2024-12-10T11:27:23.501Z] 6810.27 IOPS, 26.60 MiB/s [2024-12-10T11:27:23.501Z] 6838.75 IOPS, 26.71 MiB/s [2024-12-10T11:27:23.501Z] 6811.77 IOPS, 26.61 MiB/s [2024-12-10T11:27:23.501Z] 6781.79 IOPS, 26.49 MiB/s [2024-12-10T11:27:23.501Z] 6746.73 IOPS, 26.35 MiB/s 00:24:16.675 Latency(us) 00:24:16.675 [2024-12-10T11:27:23.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.675 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:16.675 Verification LBA range: start 0x0 length 0x4000 00:24:16.675 NVMe0n1 : 15.02 6746.07 26.35 224.03 0.00 18324.11 860.16 20137.43 00:24:16.675 [2024-12-10T11:27:23.501Z] =================================================================================================================== 00:24:16.675 [2024-12-10T11:27:23.501Z] Total : 6746.07 26.35 224.03 0.00 18324.11 860.16 20137.43 00:24:16.675 Received shutdown signal, test time was about 15.000000 seconds 00:24:16.675 00:24:16.675 Latency(us) 00:24:16.675 [2024-12-10T11:27:23.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.675 [2024-12-10T11:27:23.501Z] =================================================================================================================== 00:24:16.675 [2024-12-10T11:27:23.501Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:16.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=82378 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 82378 /var/tmp/bdevperf.sock 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 82378 ']' 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.675 11:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:18.050 11:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.050 11:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:18.050 11:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:18.308 [2024-12-10 11:27:24.911052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:18.308 11:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:24:18.567 [2024-12-10 11:27:25.195222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:24:18.567 11:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:18.824 NVMe0n1 00:24:18.824 11:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:19.425 00:24:19.425 11:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:19.683 00:24:19.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:19.683 11:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:19.941 11:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.199 11:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:23.482 11:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:23.482 11:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.741 11:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=82455 00:24:23.741 11:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.741 11:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 82455 00:24:24.677 { 00:24:24.677 "results": [ 00:24:24.677 { 00:24:24.677 "job": "NVMe0n1", 00:24:24.677 "core_mask": "0x1", 00:24:24.677 "workload": "verify", 00:24:24.677 "status": "finished", 00:24:24.677 "verify_range": { 00:24:24.677 "start": 0, 00:24:24.677 "length": 16384 00:24:24.677 }, 00:24:24.677 "queue_depth": 128, 
00:24:24.677 "io_size": 4096, 00:24:24.677 "runtime": 1.012359, 00:24:24.677 "iops": 6554.986916696547, 00:24:24.677 "mibps": 25.605417643345888, 00:24:24.677 "io_failed": 0, 00:24:24.677 "io_timeout": 0, 00:24:24.677 "avg_latency_us": 19386.250836758183, 00:24:24.677 "min_latency_us": 1601.1636363636364, 00:24:24.677 "max_latency_us": 22401.396363636362 00:24:24.677 } 00:24:24.677 ], 00:24:24.677 "core_count": 1 00:24:24.677 } 00:24:24.677 11:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:24.677 [2024-12-10 11:27:23.539066] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:24:24.677 [2024-12-10 11:27:23.539278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82378 ] 00:24:24.677 [2024-12-10 11:27:23.738807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.677 [2024-12-10 11:27:23.887843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.677 [2024-12-10 11:27:24.119429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:24.677 [2024-12-10 11:27:26.945936] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:24:24.677 [2024-12-10 11:27:26.946101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.677 [2024-12-10 11:27:26.946148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.677 [2024-12-10 11:27:26.946181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.677 [2024-12-10 11:27:26.946202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.677 [2024-12-10 11:27:26.946224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.677 [2024-12-10 11:27:26.946243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.677 [2024-12-10 11:27:26.946265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.677 [2024-12-10 11:27:26.946284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.677 [2024-12-10 11:27:26.946312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:24.677 [2024-12-10 11:27:26.946419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:24.677 [2024-12-10 11:27:26.946477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:24.677 [2024-12-10 11:27:26.958678] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:24:24.677 Running I/O for 1 seconds... 00:24:24.677 6492.00 IOPS, 25.36 MiB/s 00:24:24.677 Latency(us) 00:24:24.677 [2024-12-10T11:27:31.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.677 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:24.677 Verification LBA range: start 0x0 length 0x4000 00:24:24.677 NVMe0n1 : 1.01 6554.99 25.61 0.00 0.00 19386.25 1601.16 22401.40 00:24:24.677 [2024-12-10T11:27:31.503Z] =================================================================================================================== 00:24:24.677 [2024-12-10T11:27:31.503Z] Total : 6554.99 25.61 0.00 0.00 19386.25 1601.16 22401.40 00:24:24.677 11:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:24.677 11:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:25.244 11:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:25.502 11:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.502 11:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:25.760 11:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:26.018 11:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:29.314 11:27:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:29.314 11:27:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 82378 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 82378 ']' 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 82378 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82378 00:24:29.314 killing process with pid 82378 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82378' 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 82378 00:24:29.314 11:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 82378 00:24:30.263 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:30.522 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:30.781 rmmod nvme_tcp 00:24:30.781 rmmod nvme_fabrics 00:24:30.781 rmmod nvme_keyring 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 82111 ']' 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 82111 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 82111 ']' 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 82111 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82111 00:24:30.781 killing process with pid 82111 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82111' 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 82111 00:24:30.781 11:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 82111 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:24:32.157 00:24:32.157 real 0m37.656s 00:24:32.157 user 2m24.155s 00:24:32.157 sys 0m6.036s 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.157 11:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.157 ************************************ 00:24:32.157 END TEST nvmf_failover 00:24:32.157 ************************************ 00:24:32.417 11:27:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.417 11:27:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:32.417 11:27:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.417 11:27:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.417 ************************************ 00:24:32.417 START TEST nvmf_host_discovery 00:24:32.417 ************************************ 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.417 * Looking for test storage... 
00:24:32.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.417 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:32.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.418 --rc genhtml_branch_coverage=1 00:24:32.418 --rc genhtml_function_coverage=1 00:24:32.418 --rc genhtml_legend=1 00:24:32.418 --rc geninfo_all_blocks=1 00:24:32.418 --rc geninfo_unexecuted_blocks=1 00:24:32.418 00:24:32.418 ' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:32.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.418 --rc genhtml_branch_coverage=1 00:24:32.418 --rc genhtml_function_coverage=1 00:24:32.418 --rc genhtml_legend=1 00:24:32.418 --rc geninfo_all_blocks=1 00:24:32.418 --rc geninfo_unexecuted_blocks=1 00:24:32.418 00:24:32.418 ' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:32.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.418 --rc genhtml_branch_coverage=1 00:24:32.418 --rc genhtml_function_coverage=1 00:24:32.418 --rc genhtml_legend=1 00:24:32.418 --rc geninfo_all_blocks=1 00:24:32.418 --rc geninfo_unexecuted_blocks=1 00:24:32.418 00:24:32.418 ' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:32.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.418 --rc genhtml_branch_coverage=1 00:24:32.418 --rc genhtml_function_coverage=1 00:24:32.418 --rc genhtml_legend=1 00:24:32.418 --rc geninfo_all_blocks=1 00:24:32.418 --rc geninfo_unexecuted_blocks=1 00:24:32.418 00:24:32.418 ' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.418 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:32.418 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:32.419 Cannot find device "nvmf_init_br" 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:32.419 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:32.678 Cannot find device "nvmf_init_br2" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:32.678 Cannot find device "nvmf_tgt_br" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:32.678 Cannot find device "nvmf_tgt_br2" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:32.678 Cannot find device "nvmf_init_br" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:32.678 Cannot find device "nvmf_init_br2" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:32.678 Cannot find device "nvmf_tgt_br" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:32.678 Cannot find device "nvmf_tgt_br2" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:32.678 Cannot find device "nvmf_br" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:32.678 Cannot find device "nvmf_init_if" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:32.678 Cannot find device "nvmf_init_if2" 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:32.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:32.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:32.678 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:32.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:32.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:24:32.937 00:24:32.937 --- 10.0.0.3 ping statistics --- 00:24:32.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.937 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:24:32.937 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:32.937 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:32.937 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:24:32.937 00:24:32.937 --- 10.0.0.4 ping statistics --- 00:24:32.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.937 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:32.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:24:32.938 00:24:32.938 --- 10.0.0.1 ping statistics --- 00:24:32.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.938 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:32.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:32.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:24:32.938 00:24:32.938 --- 10.0.0.2 ping statistics --- 00:24:32.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.938 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=82804 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 82804 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 82804 ']' 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.938 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.197 [2024-12-10 11:27:39.769525] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:24:33.197 [2024-12-10 11:27:39.769674] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.197 [2024-12-10 11:27:39.952634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.456 [2024-12-10 11:27:40.093935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.456 [2024-12-10 11:27:40.094032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.456 [2024-12-10 11:27:40.094082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.456 [2024-12-10 11:27:40.094135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.456 [2024-12-10 11:27:40.094163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.456 [2024-12-10 11:27:40.095924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.714 [2024-12-10 11:27:40.292118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:33.974 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.974 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:33.974 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:33.974 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:33.974 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.974 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.974 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.974 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.974 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.974 [2024-12-10 11:27:40.796782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.233 [2024-12-10 11:27:40.804934] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.233 11:27:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.233 null0 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.233 null1 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=82836 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 82836 /tmp/host.sock 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 82836 ']' 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.233 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.233 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.233 [2024-12-10 11:27:40.955614] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:24:34.233 [2024-12-10 11:27:40.955805] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82836 ] 00:24:34.493 [2024-12-10 11:27:41.143078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.493 [2024-12-10 11:27:41.267947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.752 [2024-12-10 11:27:41.448744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.319 11:27:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:35.319 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 11:27:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.319 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.578 [2024-12-10 11:27:42.245535] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.578 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.579 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:35.837 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:36.403 [2024-12-10 11:27:42.921619] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:36.403 [2024-12-10 11:27:42.921675] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:36.403 [2024-12-10 11:27:42.921723] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:36.404 [2024-12-10 11:27:42.927710] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:24:36.404 [2024-12-10 11:27:42.990367] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:24:36.404 [2024-12-10 11:27:42.991904] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:24:36.404 [2024-12-10 11:27:42.994240] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:36.404 [2024-12-10 11:27:42.994281] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:36.404 [2024-12-10 11:27:43.000463] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.662 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.922 11:27:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.922 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
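The helper functions expanded repeatedly in the xtrace output above are thin wrappers around the host application's JSON-RPC socket. Reconstructed from the trace (a sketch of what host/discovery.sh evidently does, not the verbatim script; rpc_cmd is assumed to wrap SPDK's scripts/rpc.py):

    get_subsystem_names() {    # names of controllers the host has attached, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {          # bdevs created from attached namespaces, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {    # trsvcid of every path to controller $1, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() { # notifications (newly registered bdevs) issued since the last check
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))  # cursor advance inferred from the counter values in the trace
    }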
00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.923 [2024-12-10 11:27:43.693901] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:24:36.923 [2024-12-10 11:27:43.700762] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
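Every assertion in this test goes through the same polling primitive from autotest_common.sh; the repeated `local max=10`, `(( max-- ))`, `eval`, and `sleep 1` lines in the trace correspond to a loop along these lines (a reconstructed sketch; the timeout return code is assumed, since the trace only exercises the success branch):

    waitforcondition() {
        local cond=$1    # a bash expression such as '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while (( max-- )); do
            # re-evaluate against live RPC state via the helpers sketched earlier
            eval "$cond" && return 0
            sleep 1      # autotest_common.sh@924 in the trace
        done
        return 1         # assumed: give up after roughly 10 seconds
    }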
00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.923 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.182 [2024-12-10 11:27:43.808850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:37.182 [2024-12-10 11:27:43.809241] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:37.182 [2024-12-10 11:27:43.809308] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:37.182 [2024-12-10 11:27:43.815244] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:37.182 [2024-12-10 11:27:43.874963] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:24:37.182 [2024-12-10 11:27:43.875051] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:37.182 [2024-12-10 11:27:43.875073] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:37.182 [2024-12-10 11:27:43.875084] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:37.182 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.182 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:37.183 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.183 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:37.183 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:37.183 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:37.183 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:37.183 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.183 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.183 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:37.183 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.442 [2024-12-10 11:27:44.070073] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:37.442 [2024-12-10 11:27:44.070138] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:37.442 [2024-12-10 11:27:44.071104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.442 [2024-12-10 11:27:44.071168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.442 [2024-12-10 11:27:44.071189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.442 [2024-12-10 11:27:44.071204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.442 [2024-12-10 11:27:44.071218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.442 [2024-12-10 11:27:44.071231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.442 [2024-12-10 11:27:44.071247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.442 [2024-12-10 11:27:44.071260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.442 [2024-12-10 11:27:44.071274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:37.442 [2024-12-10 11:27:44.076053] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:24:37.442 [2024-12-10 11:27:44.076107] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:37.442 [2024-12-10 11:27:44.076232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:37.442 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:37.443 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:37.443 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.443 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.443 11:27:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:37.443 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:37.443 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:37.443 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:37.443 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.443 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.443 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.702 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.078 [2024-12-10 11:27:45.472482] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:39.078 [2024-12-10 11:27:45.472529] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:39.078 [2024-12-10 11:27:45.472572] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:39.078 [2024-12-10 11:27:45.478560] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:24:39.078 [2024-12-10 11:27:45.545224] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:24:39.078 [2024-12-10 11:27:45.546922] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 00:24:39.078 [2024-12-10 11:27:45.549923] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:39.078 [2024-12-10 11:27:45.550149] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:39.078 [2024-12-10 11:27:45.552692] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
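The negative checks that follow re-issue the discovery RPC on a socket that already has a discovery service named "nvme" and expect JSON-RPC error -17 ("File exists"). Outside the harness, the rpc_cmd call in the trace corresponds roughly to the invocation below (assuming rpc_cmd wraps SPDK's scripts/rpc.py, as the autotest helpers normally do):

    # a second start with the same -b nvme name must be rejected with "File exists"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w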
00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.078 request: 00:24:39.078 { 00:24:39.078 "name": "nvme", 00:24:39.078 "trtype": "tcp", 00:24:39.078 "traddr": "10.0.0.3", 00:24:39.078 "adrfam": "ipv4", 00:24:39.078 "trsvcid": "8009", 00:24:39.078 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:39.078 "wait_for_attach": true, 00:24:39.078 "method": "bdev_nvme_start_discovery", 00:24:39.078 "req_id": 1 00:24:39.078 } 00:24:39.078 Got JSON-RPC error response 00:24:39.078 response: 00:24:39.078 { 00:24:39.078 "code": -17, 00:24:39.078 "message": "File exists" 00:24:39.078 } 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.078 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.079 11:27:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.079 request: 00:24:39.079 { 00:24:39.079 "name": "nvme_second", 00:24:39.079 "trtype": "tcp", 00:24:39.079 "traddr": "10.0.0.3", 00:24:39.079 "adrfam": "ipv4", 00:24:39.079 "trsvcid": "8009", 00:24:39.079 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:39.079 "wait_for_attach": true, 00:24:39.079 "method": "bdev_nvme_start_discovery", 00:24:39.079 "req_id": 1 00:24:39.079 } 00:24:39.079 Got JSON-RPC error response 00:24:39.079 response: 00:24:39.079 { 00:24:39.079 "code": -17, 00:24:39.079 "message": "File exists" 00:24:39.079 } 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.079 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.015 [2024-12-10 11:27:46.834963] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.015 [2024-12-10 11:27:46.835084] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:24:40.015 [2024-12-10 11:27:46.835151] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:40.015 [2024-12-10 11:27:46.835168] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:40.015 [2024-12-10 11:27:46.835183] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:24:41.391 [2024-12-10 11:27:47.834994] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.391 [2024-12-10 11:27:47.835075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:24:41.391 [2024-12-10 11:27:47.835136] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:41.391 [2024-12-10 11:27:47.835152] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:41.391 [2024-12-10 11:27:47.835165] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:24:42.326 [2024-12-10 11:27:48.834714] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:24:42.326 request: 00:24:42.326 { 00:24:42.326 "name": "nvme_second", 00:24:42.326 "trtype": "tcp", 00:24:42.326 "traddr": "10.0.0.3", 00:24:42.326 "adrfam": "ipv4", 00:24:42.326 "trsvcid": "8010", 00:24:42.326 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:42.326 "wait_for_attach": false, 00:24:42.326 "attach_timeout_ms": 3000, 00:24:42.326 "method": "bdev_nvme_start_discovery", 00:24:42.326 "req_id": 1 00:24:42.326 } 00:24:42.326 Got JSON-RPC error response 00:24:42.326 response: 00:24:42.326 { 00:24:42.326 "code": -110, 00:24:42.326 "message": "Connection timed out" 00:24:42.326 } 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- 
# trap - SIGINT SIGTERM EXIT 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 82836 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:42.326 rmmod nvme_tcp 00:24:42.326 rmmod nvme_fabrics 00:24:42.326 rmmod nvme_keyring 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 82804 ']' 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 82804 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 82804 ']' 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 82804 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.326 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82804 00:24:42.326 killing process with pid 82804 00:24:42.326 11:27:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.326 11:27:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.326 11:27:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82804' 00:24:42.326 11:27:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 82804 00:24:42.326 11:27:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 82804 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:43.262 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:24:43.520 00:24:43.520 real 0m11.266s 00:24:43.520 user 0m20.981s 00:24:43.520 sys 0m2.230s 00:24:43.520 ************************************ 00:24:43.520 END TEST nvmf_host_discovery 00:24:43.520 ************************************ 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.520 ************************************ 00:24:43.520 START TEST nvmf_host_multipath_status 00:24:43.520 ************************************ 00:24:43.520 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:43.784 * Looking for test storage... 00:24:43.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.784 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:43.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.784 --rc genhtml_branch_coverage=1 00:24:43.784 --rc genhtml_function_coverage=1 00:24:43.785 --rc genhtml_legend=1 00:24:43.785 --rc geninfo_all_blocks=1 00:24:43.785 --rc geninfo_unexecuted_blocks=1 00:24:43.785 00:24:43.785 ' 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:43.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.785 --rc genhtml_branch_coverage=1 00:24:43.785 --rc genhtml_function_coverage=1 00:24:43.785 --rc genhtml_legend=1 00:24:43.785 --rc geninfo_all_blocks=1 00:24:43.785 --rc geninfo_unexecuted_blocks=1 00:24:43.785 00:24:43.785 ' 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:43.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.785 --rc genhtml_branch_coverage=1 00:24:43.785 --rc genhtml_function_coverage=1 00:24:43.785 --rc genhtml_legend=1 00:24:43.785 --rc geninfo_all_blocks=1 00:24:43.785 --rc geninfo_unexecuted_blocks=1 00:24:43.785 00:24:43.785 ' 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:43.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.785 --rc genhtml_branch_coverage=1 00:24:43.785 --rc genhtml_function_coverage=1 00:24:43.785 --rc genhtml_legend=1 00:24:43.785 --rc geninfo_all_blocks=1 00:24:43.785 --rc geninfo_unexecuted_blocks=1 00:24:43.785 00:24:43.785 ' 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:43.785 11:27:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.785 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:43.785 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:43.786 Cannot find device "nvmf_init_br" 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:43.786 Cannot find device "nvmf_init_br2" 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:43.786 Cannot find device "nvmf_tgt_br" 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:43.786 Cannot find device "nvmf_tgt_br2" 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:43.786 Cannot find device "nvmf_init_br" 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:24:43.786 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:43.786 Cannot find device "nvmf_init_br2" 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:44.045 Cannot find device "nvmf_tgt_br" 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:44.045 Cannot find device "nvmf_tgt_br2" 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:44.045 Cannot find device "nvmf_br" 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:24:44.045 Cannot find device "nvmf_init_if" 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:44.045 Cannot find device "nvmf_init_if2" 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:44.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:44.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:44.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:44.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:24:44.045 00:24:44.045 --- 10.0.0.3 ping statistics --- 00:24:44.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.045 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:24:44.045 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:44.304 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:44.304 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:24:44.304 00:24:44.304 --- 10.0.0.4 ping statistics --- 00:24:44.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.304 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:44.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:24:44.304 00:24:44.304 --- 10.0.0.1 ping statistics --- 00:24:44.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.304 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:44.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:24:44.304 00:24:44.304 --- 10.0.0.2 ping statistics --- 00:24:44.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.304 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=83351 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 83351 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 83351 ']' 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.304 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:44.304 [2024-12-10 11:27:51.036831] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:24:44.304 [2024-12-10 11:27:51.036989] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.562 [2024-12-10 11:27:51.219769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:44.562 [2024-12-10 11:27:51.345125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.562 [2024-12-10 11:27:51.345194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.562 [2024-12-10 11:27:51.345217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.562 [2024-12-10 11:27:51.345246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.562 [2024-12-10 11:27:51.345263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.562 [2024-12-10 11:27:51.347367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.562 [2024-12-10 11:27:51.347396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.819 [2024-12-10 11:27:51.568151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:45.385 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.385 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:45.385 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.385 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.385 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:45.385 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.385 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=83351 00:24:45.385 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:45.644 [2024-12-10 11:27:52.399139] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.644 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:46.211 Malloc0 00:24:46.211 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:46.469 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.727 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:46.986 [2024-12-10 11:27:53.573870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:46.986 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:47.244 [2024-12-10 11:27:53.890055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=83411 00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 83411 /var/tmp/bdevperf.sock 00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 83411 ']' 00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.244 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:48.239 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.239 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:48.239 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:48.497 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:48.754 Nvme0n1 00:24:49.012 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:49.271 Nvme0n1 00:24:49.271 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:49.271 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:51.172 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:51.172 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:24:51.739 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:51.997 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:52.932 11:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:52.932 11:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:52.932 11:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.932 11:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:53.190 11:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.190 11:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:53.190 11:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.190 11:27:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:53.449 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:53.449 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:53.449 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.449 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.016 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.016 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.016 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.016 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.016 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.016 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:54.016 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.016 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.276 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.276 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:54.276 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.276 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:54.841 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.841 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:54.841 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:55.099 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:55.357 11:28:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:56.291 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:56.291 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:56.291 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.291 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.549 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:56.549 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:56.549 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.549 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:56.807 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.807 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:56.807 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.807 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.066 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.066 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.066 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.066 11:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.632 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.632 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:57.632 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.632 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.891 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.891 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.891 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.891 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.183 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.183 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:58.183 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:58.440 11:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:24:58.698 11:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:59.632 11:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:59.632 11:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:59.632 11:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.632 11:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:00.199 11:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.199 11:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:00.199 11:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.199 11:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:00.469 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.469 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:00.469 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.469 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:00.743 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.743 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:25:00.743 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.743 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:01.032 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.032 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:01.032 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.032 11:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:01.291 11:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.291 11:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:01.291 11:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.291 11:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:01.549 11:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.549 11:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:01.549 11:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:01.808 11:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:02.374 11:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:03.307 11:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:03.307 11:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:03.307 11:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.307 11:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:03.565 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.565 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:03.565 11:28:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.565 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:03.823 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:03.823 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:03.823 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.823 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:04.081 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.081 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:04.081 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.081 11:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:04.647 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.647 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:04.647 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:04.647 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.905 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.905 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:04.905 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.905 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:05.163 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.163 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:05.163 11:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:05.421 11:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:05.679 11:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:06.614 11:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:06.614 11:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:06.614 11:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.614 11:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:07.179 11:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:07.179 11:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:07.179 11:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:07.179 11:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.438 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:07.438 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:07.438 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.438 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.697 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.697 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.697 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.697 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:07.955 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.955 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:07.955 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.955 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:25:08.214 11:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.214 11:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:08.214 11:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.214 11:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.477 11:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.477 11:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:08.477 11:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:08.735 11:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:09.302 11:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:10.276 11:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:10.276 11:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:10.276 11:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.276 11:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.533 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.533 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:10.533 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.533 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.791 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.791 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.791 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.791 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
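(For reference while reading the checks above and below: each cycle in this run flips the ANA state of the two listeners with set_ANA_state, sleeps one second, and then check_status asserts the current/connected/accessible flags of each io_path on the initiator. The following is a minimal sketch of that pattern, reconstructed only from the rpc.py and jq invocations visible in this log; the real helpers live in host/multipath_status.sh as named above and may differ in detail.)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  set_ANA_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }

  port_status() {     # $1 = port, $2 = attribute (current|connected|accessible), $3 = expected
      local actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$actual" == "$3" ]]
  }

  # Example, matching the "set_ANA_state non_optimized inaccessible; sleep 1" cycle above:
  # check_status true false true true true false
  #   == port_status 4420 current true    && port_status 4421 current false    &&
  #      port_status 4420 connected true  && port_status 4421 connected true   &&
  #      port_status 4420 accessible true && port_status 4421 accessible false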
00:25:11.050 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.050 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.050 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.050 11:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.308 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.308 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:11.308 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.308 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.566 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.566 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:11.566 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.566 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.131 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.131 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:12.131 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:12.131 11:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:25:12.697 11:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:12.956 11:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:13.891 11:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:13.891 11:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.891 11:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:25:13.891 11:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:14.149 11:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.149 11:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:14.149 11:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.149 11:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:14.407 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.408 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:14.408 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.408 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:14.973 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.973 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:14.973 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.973 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:15.232 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.232 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:15.232 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:15.232 11:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.491 11:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.491 11:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:15.491 11:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.491 11:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:15.749 11:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.749 
11:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:15.749 11:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:16.007 11:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:16.264 11:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:17.198 11:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:17.198 11:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:17.198 11:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.198 11:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:17.764 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.764 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:17.764 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.764 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:17.764 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.764 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:17.764 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.764 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:18.023 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.023 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.023 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.023 11:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.589 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.589 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:18.589 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:18.589 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.589 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.589 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:18.589 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.589 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:18.848 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.848 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:18.848 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:19.415 11:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:25:19.674 11:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:20.610 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:20.610 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:20.610 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.610 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:20.868 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.869 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:20.869 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:20.869 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.127 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.127 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:25:21.127 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.127 11:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.386 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.386 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.386 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.386 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.645 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.645 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:21.645 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.645 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.903 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.904 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:21.904 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:21.904 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.470 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.470 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:22.470 11:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:22.470 11:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:22.729 11:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:24.104 11:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:24.104 11:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:24.104 11:28:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.104 11:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:24.104 11:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.104 11:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:24.104 11:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.105 11:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.363 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.363 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:24.363 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.363 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.929 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.929 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.929 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.929 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.929 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.929 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.929 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.929 11:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.496 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.496 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:25.496 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.496 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 83411 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 83411 ']' 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 83411 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83411 00:25:25.755 killing process with pid 83411 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83411' 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 83411 00:25:25.755 11:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 83411 00:25:25.755 { 00:25:25.755 "results": [ 00:25:25.755 { 00:25:25.755 "job": "Nvme0n1", 00:25:25.755 "core_mask": "0x4", 00:25:25.755 "workload": "verify", 00:25:25.755 "status": "terminated", 00:25:25.755 "verify_range": { 00:25:25.755 "start": 0, 00:25:25.755 "length": 16384 00:25:25.755 }, 00:25:25.755 "queue_depth": 128, 00:25:25.755 "io_size": 4096, 00:25:25.755 "runtime": 36.356929, 00:25:25.755 "iops": 6804.067527265573, 00:25:25.755 "mibps": 26.578388778381143, 00:25:25.755 "io_failed": 0, 00:25:25.755 "io_timeout": 0, 00:25:25.755 "avg_latency_us": 18775.89893885801, 00:25:25.755 "min_latency_us": 170.35636363636362, 00:25:25.755 "max_latency_us": 4057035.869090909 00:25:25.755 } 00:25:25.755 ], 00:25:25.755 "core_count": 1 00:25:25.755 } 00:25:26.724 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 83411 00:25:26.725 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:26.725 [2024-12-10 11:27:54.052408] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:25:26.725 [2024-12-10 11:27:54.052569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83411 ] 00:25:26.725 [2024-12-10 11:27:54.232073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.725 [2024-12-10 11:27:54.360134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.725 [2024-12-10 11:27:54.541281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:26.725 Running I/O for 90 seconds... 
00:25:26.725 7047.00 IOPS, 27.53 MiB/s [2024-12-10T11:28:33.551Z] 7180.50 IOPS, 28.05 MiB/s [2024-12-10T11:28:33.551Z] 7225.33 IOPS, 28.22 MiB/s [2024-12-10T11:28:33.551Z] 7250.25 IOPS, 28.32 MiB/s [2024-12-10T11:28:33.551Z] 7268.20 IOPS, 28.39 MiB/s [2024-12-10T11:28:33.551Z] 7270.83 IOPS, 28.40 MiB/s [2024-12-10T11:28:33.551Z] 7265.86 IOPS, 28.38 MiB/s [2024-12-10T11:28:33.551Z] 7264.12 IOPS, 28.38 MiB/s [2024-12-10T11:28:33.551Z] 7247.78 IOPS, 28.31 MiB/s [2024-12-10T11:28:33.551Z] 7245.60 IOPS, 28.30 MiB/s [2024-12-10T11:28:33.551Z] 7239.18 IOPS, 28.28 MiB/s [2024-12-10T11:28:33.551Z] 7229.08 IOPS, 28.24 MiB/s [2024-12-10T11:28:33.551Z] 7214.54 IOPS, 28.18 MiB/s [2024-12-10T11:28:33.551Z] 7205.50 IOPS, 28.15 MiB/s [2024-12-10T11:28:33.551Z] 7194.47 IOPS, 28.10 MiB/s [2024-12-10T11:28:33.551Z] 7181.81 IOPS, 28.05 MiB/s [2024-12-10T11:28:33.551Z] [2024-12-10 11:28:12.100026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 
[2024-12-10 11:28:12.100634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.100948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.100970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3760 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.101599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.725 [2024-12-10 11:28:12.101653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.725 [2024-12-10 11:28:12.101707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.725 [2024-12-10 11:28:12.101774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.725 [2024-12-10 11:28:12.101832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.725 [2024-12-10 11:28:12.101886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.101916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.725 [2024-12-10 11:28:12.101939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.102002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.725 [2024-12-10 11:28:12.102025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.102056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.725 [2024-12-10 11:28:12.102080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:26.725 [2024-12-10 11:28:12.102118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.725 [2024-12-10 11:28:12.102144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.102951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.102974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.726 
[2024-12-10 11:28:12.103004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.103028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.726 [2024-12-10 11:28:12.103081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.726 [2024-12-10 11:28:12.103135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.726 [2024-12-10 11:28:12.103189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.726 [2024-12-10 11:28:12.103242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.726 [2024-12-10 11:28:12.103296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.726 [2024-12-10 11:28:12.103365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.726 [2024-12-10 11:28:12.103425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.726 [2024-12-10 11:28:12.103479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.103571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.103628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.103710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.103788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.103854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.103909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.103963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.103993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.104017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.104047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.104070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.104101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.104125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.104156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.104189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.104219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.104242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.104272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.104296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.104326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.104366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.104414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.104439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.104469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.726 [2024-12-10 11:28:12.104493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:26.726 [2024-12-10 11:28:12.104523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.104547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.104577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.104601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.104632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.104655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.104700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.104730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.104775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.104801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.104832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.104855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.104886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.104909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.104940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.104963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.104994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.105017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.105071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.105134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.105196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.105250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.105304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:26.727 [2024-12-10 11:28:12.105373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.105433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.105513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.105569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.105623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.105681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.105742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.105795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.105880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.105936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.105967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3496 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.105990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.106044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.106097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.106170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.106230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.106290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.106343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.727 [2024-12-10 11:28:12.106417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.106471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.106525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:119 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.106579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.106649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.106703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.106757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.106812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.727 [2024-12-10 11:28:12.106871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:26.727 [2024-12-10 11:28:12.106902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.728 [2024-12-10 11:28:12.106925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.106955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.728 [2024-12-10 11:28:12.106978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.107009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.728 [2024-12-10 11:28:12.107032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.107062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.728 [2024-12-10 11:28:12.107085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.107116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.728 [2024-12-10 11:28:12.107139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.107169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.728 [2024-12-10 11:28:12.107192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.107226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.728 [2024-12-10 11:28:12.107256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.108986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.728 [2024-12-10 11:28:12.109031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:26.728 
[2024-12-10 11:28:12.109438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.109959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.109995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.110913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.110937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.111523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.728 [2024-12-10 11:28:12.111563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:26.728 [2024-12-10 11:28:12.111603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.111640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.111688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.111734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.111768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.111791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.111823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.111846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.111892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.111917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.111948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.111973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.112050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.112102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 
[2024-12-10 11:28:12.112448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.112955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.112991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3936 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.113059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.113113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.113166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.113237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.113290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.113369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.113430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.113483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.729 [2024-12-10 11:28:12.113535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:26.729 [2024-12-10 11:28:12.113566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.729 [2024-12-10 11:28:12.113588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.113618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:3952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.113641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.113671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.113694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.113724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.113747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.113778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.113801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.113831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.113853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.113884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.113907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.113944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.113969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.114023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.114088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.114141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.114193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.114247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.114304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.114372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.114440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.114494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.114546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.114620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.114679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.114733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:25:26.730 [2024-12-10 11:28:12.114775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.114800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.114854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.114907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.114960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.114990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.115012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.115066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.115119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.115173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.115230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.115282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.730 [2024-12-10 11:28:12.115342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.115417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.115480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.115535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.115596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.115657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.115727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.115803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.730 [2024-12-10 11:28:12.115878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:26.730 [2024-12-10 11:28:12.115910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.115934] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.115964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.115988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.116042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.116095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.116149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.116221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.116296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.116366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.116427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.116487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.116542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.116595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.116650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.116703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.116756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.116809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.116863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.116917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.116959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.116983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.117056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:26.731 [2024-12-10 11:28:12.117111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.117167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.117220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.731 [2024-12-10 11:28:12.117273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4248 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.117952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.117975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.118006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.118029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.118060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.118083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.118113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.118136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.118166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.731 [2024-12-10 11:28:12.118193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:26.731 [2024-12-10 11:28:12.118223] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.118245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.118275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.118299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.118329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.118393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.118431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.118455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.118486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.118510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.118539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.118562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.118592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.118615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.118645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.118668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.118698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.118720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.130858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.130908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.130948] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.130974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.131014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.131049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.131080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.131118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.131166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.131195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:12.131969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:12.132013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:26.732 6770.18 IOPS, 26.45 MiB/s [2024-12-10T11:28:33.558Z] 6394.06 IOPS, 24.98 MiB/s [2024-12-10T11:28:33.558Z] 6057.53 IOPS, 23.66 MiB/s [2024-12-10T11:28:33.558Z] 5754.65 IOPS, 22.48 MiB/s [2024-12-10T11:28:33.558Z] 5802.24 IOPS, 22.66 MiB/s [2024-12-10T11:28:33.558Z] 5863.91 IOPS, 22.91 MiB/s [2024-12-10T11:28:33.558Z] 5919.91 IOPS, 23.12 MiB/s [2024-12-10T11:28:33.558Z] 6058.71 IOPS, 23.67 MiB/s [2024-12-10T11:28:33.558Z] 6193.92 IOPS, 24.20 MiB/s [2024-12-10T11:28:33.558Z] 6323.50 IOPS, 24.70 MiB/s [2024-12-10T11:28:33.558Z] 6417.89 IOPS, 25.07 MiB/s [2024-12-10T11:28:33.558Z] 6445.25 IOPS, 25.18 MiB/s [2024-12-10T11:28:33.558Z] 6454.97 IOPS, 25.21 MiB/s [2024-12-10T11:28:33.558Z] 6483.00 IOPS, 25.32 MiB/s [2024-12-10T11:28:33.558Z] 6576.77 IOPS, 25.69 MiB/s [2024-12-10T11:28:33.558Z] 6660.22 IOPS, 26.02 MiB/s [2024-12-10T11:28:33.558Z] 6745.94 IOPS, 26.35 MiB/s [2024-12-10T11:28:33.558Z] [2024-12-10 11:28:29.515639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.515776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.515856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.515893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.515940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:26.732 [2024-12-10 11:28:29.515972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.516056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.516136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.516214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.516295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.516397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.516496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.516576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.516688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.516773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:49 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.516852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.516910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.516947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.517001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.517038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.517085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.517120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.517168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.517208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.517262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.517305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.517376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.517418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.517469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.517506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.517553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.732 [2024-12-10 11:28:29.517589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.517636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.732 [2024-12-10 11:28:29.517672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:26.732 [2024-12-10 11:28:29.517720] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.517777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.517828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.517866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.517914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.517953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.518043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.518130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.518217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.518305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.518424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.518512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.518598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.518685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.518770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.518874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.518927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.518965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.519050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.519140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.519226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.519315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.519427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.519517] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.519603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.519687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.519791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.519875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.519924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.519961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.520039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.520089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.520139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.520176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.520227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.520266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.522817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.522918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.523014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 
[2024-12-10 11:28:29.523060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.523120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.523163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.523220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.523261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.523318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.733 [2024-12-10 11:28:29.523404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.523470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.733 [2024-12-10 11:28:29.523513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:26.733 [2024-12-10 11:28:29.523571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.523612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.523669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.523726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.523784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.523826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.523903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.523944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.523997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.524036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.524089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.524129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.524181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.524219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.524271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.524313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.524387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.524433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.524488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.524531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.524584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.524624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.524675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.524749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.524839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.524881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.524937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.524978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.525075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.525199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.525296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.525411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.525507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.525600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.525706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.525799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.525890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.525943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.525981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.526071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 
m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.526181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.526277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.526427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.526524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.526618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.526716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.526811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.526909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.526962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.527001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.527054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.527093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.529056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.734 [2024-12-10 11:28:29.529152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.529241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.529288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.529343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.529413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.529475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.529516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:26.734 [2024-12-10 11:28:29.529568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.734 [2024-12-10 11:28:29.529640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.529699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.529745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.529797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.529836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.529889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.529929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.529980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.530025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.530078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.530124] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.530177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.530226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.530280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.530318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.530388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.530436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.530489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.530534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.530588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.530633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.530729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.530785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.530842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.530880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.530957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.531005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.531102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.531193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.531285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.531405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.531510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.531607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.531723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.531843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.531943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.531998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.532069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.532130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.532177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.532264] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.532313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.534451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.534530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.534611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.534658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.534717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.534761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.534821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.534864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.534923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.534966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.535020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.535074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.535129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.535199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.535258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.535313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.535393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.535446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 
11:28:29.535510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.535555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.535633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.535685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.535757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.535827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.535884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.735 [2024-12-10 11:28:29.535928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.535983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.536023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:26.735 [2024-12-10 11:28:29.536073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.735 [2024-12-10 11:28:29.536117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.536170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.536216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.536272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.536316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.536392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.536441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.536498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.536544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.536599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.536647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.536711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.536754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.537976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.538049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.538191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.538295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.538399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.538481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.538559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.538643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.538725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.538815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.538894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.538938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.538972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.539061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.539159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.539254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.539385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.736 [2024-12-10 11:28:29.539493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.539568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.539644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.539751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.539842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:26.736 [2024-12-10 11:28:29.539887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.736 [2024-12-10 11:28:29.539920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:26.736 6782.82 IOPS, 26.50 MiB/s [2024-12-10T11:28:33.562Z] 6791.54 IOPS, 26.53 MiB/s [2024-12-10T11:28:33.562Z] 6801.97 IOPS, 26.57 MiB/s [2024-12-10T11:28:33.562Z] Received shutdown signal, test time was about 36.357877 seconds 00:25:26.736 00:25:26.736 Latency(us) 00:25:26.736 [2024-12-10T11:28:33.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.736 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:26.736 Verification LBA range: start 0x0 length 0x4000 00:25:26.736 Nvme0n1 : 36.36 6804.07 26.58 0.00 0.00 18775.90 170.36 4057035.87 00:25:26.736 [2024-12-10T11:28:33.562Z] =================================================================================================================== 00:25:26.736 [2024-12-10T11:28:33.562Z] Total : 6804.07 26.58 0.00 0.00 18775.90 170.36 4057035.87 00:25:26.736 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:26.995 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:26.995 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:26.995 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:26.995 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:26.995 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:26.995 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.995 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:26.995 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.995 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.995 rmmod nvme_tcp 00:25:26.995 rmmod nvme_fabrics 00:25:27.254 rmmod nvme_keyring 00:25:27.254 11:28:33 
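All of the completions dumped above carry status (03/02); in NVMe terms that is Status Code Type 3h (Path Related Status), Status Code 02h, i.e. Asymmetric Namespace Access Inaccessible, which is presumably what the multipath test provokes while it flips the ANA state of one path. The closing throughput figures are also self-consistent for a 4 KiB verify workload: 6804.07 IOPS x 4096 B ≈ 26.58 MiB/s, matching the MiB/s column of the summary table.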
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 83351 ']' 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 83351 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 83351 ']' 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 83351 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83351 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:27.254 killing process with pid 83351 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83351' 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 83351 00:25:27.254 11:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 83351 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:28.631 11:28:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:25:28.631 00:25:28.631 real 0m44.951s 00:25:28.631 user 2m24.885s 00:25:28.631 sys 0m11.501s 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.631 ************************************ 00:25:28.631 END TEST nvmf_host_multipath_status 00:25:28.631 ************************************ 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.631 ************************************ 00:25:28.631 START TEST nvmf_discovery_remove_ifc 00:25:28.631 ************************************ 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:28.631 * Looking for test storage... 
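The teardown that just closed nvmf_host_multipath_status follows the usual order: delete the subsystem over RPC, unload the host-side nvme modules, kill the target process, then dismantle the veth/bridge/namespace topology. Reduced to its essentials (same names as this run, but a sketch rather than the literal nvmftestfini implementation):

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # detach the host-facing subsystem first
  modprobe -r nvme-tcp nvme-fabrics                                 # drop initiator kernel modules
  kill "$nvmfpid" && wait "$nvmfpid"                                # stop the nvmf_tgt started for this test
  ip link delete nvmf_br type bridge                                # bridge, then host-side veth ends
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns delete nvmf_tgt_ns_spdk                                  # namespace takes its veth ends with it

After that the harness prints the per-test wall/CPU times and run_test moves on to nvmf_discovery_remove_ifc, whose storage probe starts above.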
00:25:28.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:28.631 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.891 --rc genhtml_branch_coverage=1 00:25:28.891 --rc genhtml_function_coverage=1 00:25:28.891 --rc genhtml_legend=1 00:25:28.891 --rc geninfo_all_blocks=1 00:25:28.891 --rc geninfo_unexecuted_blocks=1 00:25:28.891 00:25:28.891 ' 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.891 --rc genhtml_branch_coverage=1 00:25:28.891 --rc genhtml_function_coverage=1 00:25:28.891 --rc genhtml_legend=1 00:25:28.891 --rc geninfo_all_blocks=1 00:25:28.891 --rc geninfo_unexecuted_blocks=1 00:25:28.891 00:25:28.891 ' 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.891 --rc genhtml_branch_coverage=1 00:25:28.891 --rc genhtml_function_coverage=1 00:25:28.891 --rc genhtml_legend=1 00:25:28.891 --rc geninfo_all_blocks=1 00:25:28.891 --rc geninfo_unexecuted_blocks=1 00:25:28.891 00:25:28.891 ' 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.891 --rc genhtml_branch_coverage=1 00:25:28.891 --rc genhtml_function_coverage=1 00:25:28.891 --rc genhtml_legend=1 00:25:28.891 --rc geninfo_all_blocks=1 00:25:28.891 --rc geninfo_unexecuted_blocks=1 00:25:28.891 00:25:28.891 ' 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:28.891 11:28:35 
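The lcov probe traced above is scripts/common.sh doing a component-wise dotted-version comparison ('lt 1.15 2'): both strings are split on '.', '-' and ':' into arrays and compared field by field until one side differs. A reduced sketch of the same idea (illustrative only, not the exact cmp_versions helper):

  ver_lt() {                              # returns 0 (true) if $1 < $2
      local IFS=.-:                       # same separators the trace shows
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
          ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
      done
      return 1                            # equal is not less-than
  }
  ver_lt 1.15 2 && echo "lcov predates 2.x"   # prints: lcov predates 2.x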
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.891 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.892 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
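The "[: : integer expression expected" message comes from nvmf/common.sh line 33 evaluating '[ "" -eq 1 ]': the flag variable it tests is unset in this configuration, test's -eq needs an integer on both sides, so it complains and the condition simply evaluates false, which is why the run continues. The generic defensive idiom is to default the variable before the numeric test (placeholder flag name below, not a patch proposal for common.sh):

  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then    # SOME_TEST_FLAG is illustrative
      echo "optional test path enabled"
  fi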
-- # discovery_port=8009 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:28.892 11:28:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:28.892 Cannot find device "nvmf_init_br" 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:28.892 Cannot find device "nvmf_init_br2" 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:28.892 Cannot find device "nvmf_tgt_br" 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:28.892 Cannot find device "nvmf_tgt_br2" 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:28.892 Cannot find device "nvmf_init_br" 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:28.892 Cannot find device "nvmf_init_br2" 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:28.892 Cannot find device "nvmf_tgt_br" 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:28.892 Cannot find device "nvmf_tgt_br2" 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:28.892 Cannot find device "nvmf_br" 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:25:28.892 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:28.892 Cannot find device "nvmf_init_if" 00:25:28.892 11:28:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:25:28.893 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:28.893 Cannot find device "nvmf_init_if2" 00:25:28.893 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:25:28.893 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:28.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.893 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:25:28.893 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:28.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.893 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:25:28.893 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:28.893 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:28.893 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:29.151 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:29.151 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:29.151 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:29.151 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:29.151 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:29.151 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:29.151 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:29.151 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:29.151 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:29.152 11:28:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:29.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:29.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:25:29.152 00:25:29.152 --- 10.0.0.3 ping statistics --- 00:25:29.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.152 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:29.152 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:29.152 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:25:29.152 00:25:29.152 --- 10.0.0.4 ping statistics --- 00:25:29.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.152 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:29.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:25:29.152 00:25:29.152 --- 10.0.0.1 ping statistics --- 00:25:29.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.152 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:29.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:25:29.152 00:25:29.152 --- 10.0.0.2 ping statistics --- 00:25:29.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.152 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=84285 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 84285 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 84285 ']' 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
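The four one-packet pings close out nvmf_veth_init: the target side now answers on 10.0.0.3/10.0.0.4 from inside the nvmf_tgt_ns_spdk namespace, the initiator side owns 10.0.0.1/10.0.0.2, and everything is joined by the nvmf_br bridge with iptables openings for port 4420. Condensed to a single initiator/target pair, the topology built above is roughly (same interface names as the trace, second pair omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator leg
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3                                             # initiator -> target sanity check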
00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.152 11:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.410 [2024-12-10 11:28:36.095841] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:25:29.410 [2024-12-10 11:28:36.096001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.669 [2024-12-10 11:28:36.283807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.669 [2024-12-10 11:28:36.431737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.669 [2024-12-10 11:28:36.431799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.669 [2024-12-10 11:28:36.431821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.669 [2024-12-10 11:28:36.431848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.669 [2024-12-10 11:28:36.431865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.669 [2024-12-10 11:28:36.433300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.927 [2024-12-10 11:28:36.648356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.496 [2024-12-10 11:28:37.164147] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.496 [2024-12-10 11:28:37.172304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:25:30.496 null0 00:25:30.496 [2024-12-10 11:28:37.204236] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=84317 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84317 /tmp/host.sock 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 84317 ']' 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.496 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.496 11:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.755 [2024-12-10 11:28:37.393910] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:25:30.755 [2024-12-10 11:28:37.394101] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84317 ] 00:25:30.755 [2024-12-10 11:28:37.579657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.013 [2024-12-10 11:28:37.706101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.578 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.579 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:31.579 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:31.579 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:31.579 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.579 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:31.579 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.579 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:31.579 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.579 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:31.837 [2024-12-10 11:28:38.547507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:31.837 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.837 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:31.837 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.837 11:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.212 [2024-12-10 11:28:39.668969] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:33.212 [2024-12-10 11:28:39.669042] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:25:33.212 [2024-12-10 11:28:39.669079] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:33.212 [2024-12-10 11:28:39.675070] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:25:33.212 [2024-12-10 11:28:39.737744] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:25:33.212 [2024-12-10 11:28:39.739354] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:25:33.212 [2024-12-10 11:28:39.741653] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:33.212 [2024-12-10 11:28:39.741757] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:33.212 [2024-12-10 11:28:39.741826] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:33.212 [2024-12-10 11:28:39.741857] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:25:33.212 [2024-12-10 11:28:39.741901] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.212 [2024-12-10 11:28:39.748035] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
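
Editor's note: the host-side bring-up traced above reduces to three RPCs against the second nvmf_tgt instance listening on /tmp/host.sock. A minimal sketch, assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py -s <socket> from autotest_common.sh (all flag values are copied from the trace):

    sock=/tmp/host.sock
    scripts/rpc.py -s "$sock" bdev_nvme_set_options -e 1
    scripts/rpc.py -s "$sock" framework_start_init
    # attach to the target's discovery service on port 8009 and wait for the
    # discovered NVM subsystem to come up as bdev(s) named nvme0n*
    scripts/rpc.py -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The short loss/reconnect timeouts matter for the rest of the test: once the target interface disappears, the controller gets only about two seconds of reconnect attempts before it is deleted and the nvme0n1 bdev goes away, which is what the polling below waits for.
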
00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:33.212 11:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:34.147 11:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:35.109 11:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:35.367 11:28:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.367 11:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.367 11:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:35.367 11:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:35.367 11:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:35.367 11:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:35.367 11:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.367 11:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:35.367 11:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:36.303 11:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:36.303 11:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.303 11:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.303 11:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:36.303 11:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.303 11:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:36.303 11:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:36.303 11:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.303 11:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:36.303 11:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:37.240 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:37.240 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:37.240 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.240 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.240 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:37.240 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.240 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:37.499 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.499 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:37.499 11:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:38.434 11:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:38.434 [2024-12-10 11:28:45.169001] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:38.434 [2024-12-10 11:28:45.169121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.434 [2024-12-10 11:28:45.169145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.434 [2024-12-10 11:28:45.169165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.434 [2024-12-10 11:28:45.169178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.434 [2024-12-10 11:28:45.169192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.434 [2024-12-10 11:28:45.169205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.434 [2024-12-10 11:28:45.169219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.434 [2024-12-10 11:28:45.169232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.434 [2024-12-10 11:28:45.169246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.434 [2024-12-10 11:28:45.169259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.434 [2024-12-10 11:28:45.169272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:25:38.434 [2024-12-10 11:28:45.178979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:25:38.434 [2024-12-10 11:28:45.189005] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for 
reset. 00:25:38.434 [2024-12-10 11:28:45.189060] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:38.434 [2024-12-10 11:28:45.189073] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:38.434 [2024-12-10 11:28:45.189083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:38.434 [2024-12-10 11:28:45.189172] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:39.369 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:39.369 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.369 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:39.369 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.369 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:39.369 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:39.369 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:39.628 [2024-12-10 11:28:46.230492] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:25:39.628 [2024-12-10 11:28:46.230658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:25:39.628 [2024-12-10 11:28:46.230709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:25:39.628 [2024-12-10 11:28:46.230866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:25:39.628 [2024-12-10 11:28:46.232159] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:39.628 [2024-12-10 11:28:46.232288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:39.628 [2024-12-10 11:28:46.232336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:39.628 [2024-12-10 11:28:46.232396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:39.628 [2024-12-10 11:28:46.232429] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:39.628 [2024-12-10 11:28:46.232453] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:39.628 [2024-12-10 11:28:46.232471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:39.628 [2024-12-10 11:28:46.232500] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
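
Editor's note: the repeated discovery_remove_ifc.sh@29-@34 blocks above and below are the test polling the host's bdev list until it matches the expected value. A simplified reconstruction of those helpers (retry bookkeeping omitted; the real script may also bound how long it polls):

    # poll the host app until its bdev list matches what the test expects
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1        # "nvme0n1", "" (empty) or "nvme1n1" in this test
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

At this point the expected list is empty: with nvmf_tgt_if down, the keep-alive read times out (errno 110 above), the reconnect attempts fail, and once the controller gives up the nvme0n1 bdev is removed, so get_bdev_list returns an empty string.
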
00:25:39.628 [2024-12-10 11:28:46.232530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:39.628 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.628 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:39.628 11:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:40.563 [2024-12-10 11:28:47.232646] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:40.563 [2024-12-10 11:28:47.232735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:40.563 [2024-12-10 11:28:47.232775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:40.563 [2024-12-10 11:28:47.232792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:40.563 [2024-12-10 11:28:47.232806] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:40.563 [2024-12-10 11:28:47.232819] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:40.563 [2024-12-10 11:28:47.232830] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:40.563 [2024-12-10 11:28:47.232838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:40.563 [2024-12-10 11:28:47.232905] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:25:40.563 [2024-12-10 11:28:47.232975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.563 [2024-12-10 11:28:47.232997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.563 [2024-12-10 11:28:47.233041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.563 [2024-12-10 11:28:47.233055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.563 [2024-12-10 11:28:47.233069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.563 [2024-12-10 11:28:47.233081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.563 [2024-12-10 11:28:47.233094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.563 [2024-12-10 11:28:47.233107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.563 [2024-12-10 11:28:47.233121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.563 [2024-12-10 11:28:47.233151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.563 [2024-12-10 11:28:47.233165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:40.563 [2024-12-10 11:28:47.233236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:25:40.563 [2024-12-10 11:28:47.234228] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:40.563 [2024-12-10 11:28:47.234266] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:40.563 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:40.563 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:40.563 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.563 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:40.563 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:40.563 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.563 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:40.564 11:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:41.941 11:28:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.941 11:28:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.941 11:28:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.941 11:28:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.941 11:28:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.941 11:28:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.941 11:28:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.941 11:28:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.941 11:28:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:41.941 11:28:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:42.508 [2024-12-10 11:28:49.245330] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:42.508 [2024-12-10 11:28:49.245405] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:25:42.508 [2024-12-10 11:28:49.245442] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:42.508 [2024-12-10 11:28:49.251419] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:25:42.508 [2024-12-10 11:28:49.314095] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:25:42.508 [2024-12-10 11:28:49.315534] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:25:42.508 [2024-12-10 11:28:49.317804] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:42.508 [2024-12-10 11:28:49.317887] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:42.508 [2024-12-10 11:28:49.317947] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:42.508 [2024-12-10 11:28:49.317975] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:25:42.508 [2024-12-10 11:28:49.317993] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:25:42.508 [2024-12-10 11:28:49.324412] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
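
Editor's note: after the address is added back and nvmf_tgt_if comes up, discovery re-attaches and the new controller appears under the next name (nvme1, hence the nvme1n1 bdev the test now waits for). The success-path teardown that follows is, in sketch form (names and pids taken from the trace; the point is the ordering -- the failure trap set at @62 is cleared first so the EXIT handler does not run the error cleanup again):

    trap - SIGINT SIGTERM EXIT   # @88: success, drop the failure trap
    killprocess "$hostpid"       # @90: stop the host app on /tmp/host.sock (pid 84317)
    nvmftestfini                 # @91: stop nvmfpid 84285, unload nvme-tcp/fabrics,
                                 #      restore iptables and tear down the veth topology
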
00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 84317 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 84317 ']' 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 84317 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84317 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:42.774 killing process with pid 84317 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84317' 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 84317 00:25:42.774 11:28:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 84317 00:25:43.716 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:43.716 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:43.716 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:43.975 rmmod nvme_tcp 00:25:43.975 rmmod nvme_fabrics 00:25:43.975 rmmod nvme_keyring 00:25:43.975 11:28:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 84285 ']' 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 84285 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 84285 ']' 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 84285 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84285 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:43.975 killing process with pid 84285 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84285' 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 84285 00:25:43.975 11:28:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 84285 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:44.911 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:25:45.170 00:25:45.170 real 0m16.584s 00:25:45.170 user 0m27.992s 00:25:45.170 sys 0m2.569s 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:45.170 ************************************ 00:25:45.170 END TEST nvmf_discovery_remove_ifc 00:25:45.170 ************************************ 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:45.170 11:28:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:45.171 11:28:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.171 11:28:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.171 ************************************ 00:25:45.171 START TEST nvmf_identify_kernel_target 00:25:45.171 ************************************ 00:25:45.171 11:28:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:45.431 * Looking for test storage... 
00:25:45.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.431 --rc genhtml_branch_coverage=1 00:25:45.431 --rc genhtml_function_coverage=1 00:25:45.431 --rc genhtml_legend=1 00:25:45.431 --rc geninfo_all_blocks=1 00:25:45.431 --rc geninfo_unexecuted_blocks=1 00:25:45.431 00:25:45.431 ' 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.431 --rc genhtml_branch_coverage=1 00:25:45.431 --rc genhtml_function_coverage=1 00:25:45.431 --rc genhtml_legend=1 00:25:45.431 --rc geninfo_all_blocks=1 00:25:45.431 --rc geninfo_unexecuted_blocks=1 00:25:45.431 00:25:45.431 ' 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.431 --rc genhtml_branch_coverage=1 00:25:45.431 --rc genhtml_function_coverage=1 00:25:45.431 --rc genhtml_legend=1 00:25:45.431 --rc geninfo_all_blocks=1 00:25:45.431 --rc geninfo_unexecuted_blocks=1 00:25:45.431 00:25:45.431 ' 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.431 --rc genhtml_branch_coverage=1 00:25:45.431 --rc genhtml_function_coverage=1 00:25:45.431 --rc genhtml_legend=1 00:25:45.431 --rc geninfo_all_blocks=1 00:25:45.431 --rc geninfo_unexecuted_blocks=1 00:25:45.431 00:25:45.431 ' 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
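
Editor's note: the autotest_common.sh@1710-@1725 block traced above decides which coverage flags to export; the installed lcov reports 1.15, which compares less than 2 via the cmp_versions/lt helpers from scripts/common.sh, so the legacy --rc lcov_*_coverage=1 spellings end up in LCOV_OPTS and LCOV. A condensed sketch of that probe (assuming scripts/common.sh, which defines lt, is already sourced, as it is here; the exact conditional wording in autotest_common.sh may differ):

    # pick lcov coverage flags based on the installed lcov version
    ver=$(lcov --version | awk '{print $NF}')   # "1.15" on this runner
    if lt "$ver" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
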
00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.431 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:45.432 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:45.432 11:28:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:45.432 11:28:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:45.432 Cannot find device "nvmf_init_br" 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:45.432 Cannot find device "nvmf_init_br2" 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:45.432 Cannot find device "nvmf_tgt_br" 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:45.432 Cannot find device "nvmf_tgt_br2" 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:45.432 Cannot find device "nvmf_init_br" 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:25:45.432 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:45.691 Cannot find device "nvmf_init_br2" 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:45.691 Cannot find device "nvmf_tgt_br" 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:45.691 Cannot find device "nvmf_tgt_br2" 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:45.691 Cannot find device "nvmf_br" 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:45.691 Cannot find device "nvmf_init_if" 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:45.691 Cannot find device "nvmf_init_if2" 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:45.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:45.691 11:28:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:45.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:45.691 11:28:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:45.691 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:45.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:45.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:25:45.950 00:25:45.950 --- 10.0.0.3 ping statistics --- 00:25:45.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.950 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:45.950 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:45.950 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:25:45.950 00:25:45.950 --- 10.0.0.4 ping statistics --- 00:25:45.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.950 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:45.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:45.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:25:45.950 00:25:45.950 --- 10.0.0.1 ping statistics --- 00:25:45.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.950 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:45.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:45.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:25:45.950 00:25:45.950 --- 10.0.0.2 ping statistics --- 00:25:45.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.950 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:45.950 11:28:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:46.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:46.209 Waiting for block devices as requested 00:25:46.209 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:46.468 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:46.468 No valid GPT data, bailing 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:25:46.468 11:28:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:46.468 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:46.727 No valid GPT data, bailing 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:46.727 No valid GPT data, bailing 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:46.727 No valid GPT data, bailing 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -a 10.0.0.1 -t tcp -s 4420 00:25:46.727 00:25:46.727 Discovery Log Number of Records 2, Generation counter 2 00:25:46.727 =====Discovery Log Entry 0====== 00:25:46.727 trtype: tcp 00:25:46.727 adrfam: ipv4 00:25:46.727 subtype: current discovery subsystem 00:25:46.727 treq: not specified, sq flow control disable supported 00:25:46.727 portid: 1 00:25:46.727 trsvcid: 4420 00:25:46.727 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:46.727 traddr: 10.0.0.1 00:25:46.727 eflags: none 00:25:46.727 sectype: none 00:25:46.727 =====Discovery Log Entry 1====== 00:25:46.727 trtype: tcp 00:25:46.727 adrfam: ipv4 00:25:46.727 subtype: nvme subsystem 00:25:46.727 treq: not 
specified, sq flow control disable supported 00:25:46.727 portid: 1 00:25:46.727 trsvcid: 4420 00:25:46.727 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:46.727 traddr: 10.0.0.1 00:25:46.727 eflags: none 00:25:46.727 sectype: none 00:25:46.727 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:46.727 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:46.987 ===================================================== 00:25:46.987 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:46.987 ===================================================== 00:25:46.987 Controller Capabilities/Features 00:25:46.987 ================================ 00:25:46.987 Vendor ID: 0000 00:25:46.987 Subsystem Vendor ID: 0000 00:25:46.987 Serial Number: 82cc0d6aa7513e183812 00:25:46.987 Model Number: Linux 00:25:46.987 Firmware Version: 6.8.9-20 00:25:46.987 Recommended Arb Burst: 0 00:25:46.987 IEEE OUI Identifier: 00 00 00 00:25:46.987 Multi-path I/O 00:25:46.987 May have multiple subsystem ports: No 00:25:46.987 May have multiple controllers: No 00:25:46.987 Associated with SR-IOV VF: No 00:25:46.987 Max Data Transfer Size: Unlimited 00:25:46.987 Max Number of Namespaces: 0 00:25:46.987 Max Number of I/O Queues: 1024 00:25:46.987 NVMe Specification Version (VS): 1.3 00:25:46.987 NVMe Specification Version (Identify): 1.3 00:25:46.987 Maximum Queue Entries: 1024 00:25:46.987 Contiguous Queues Required: No 00:25:46.987 Arbitration Mechanisms Supported 00:25:46.987 Weighted Round Robin: Not Supported 00:25:46.987 Vendor Specific: Not Supported 00:25:46.987 Reset Timeout: 7500 ms 00:25:46.987 Doorbell Stride: 4 bytes 00:25:46.987 NVM Subsystem Reset: Not Supported 00:25:46.987 Command Sets Supported 00:25:46.987 NVM Command Set: Supported 00:25:46.987 Boot Partition: Not Supported 00:25:46.987 Memory Page Size Minimum: 4096 bytes 00:25:46.987 Memory Page Size Maximum: 4096 bytes 00:25:46.987 Persistent Memory Region: Not Supported 00:25:46.987 Optional Asynchronous Events Supported 00:25:46.987 Namespace Attribute Notices: Not Supported 00:25:46.987 Firmware Activation Notices: Not Supported 00:25:46.987 ANA Change Notices: Not Supported 00:25:46.987 PLE Aggregate Log Change Notices: Not Supported 00:25:46.987 LBA Status Info Alert Notices: Not Supported 00:25:46.987 EGE Aggregate Log Change Notices: Not Supported 00:25:46.987 Normal NVM Subsystem Shutdown event: Not Supported 00:25:46.987 Zone Descriptor Change Notices: Not Supported 00:25:46.987 Discovery Log Change Notices: Supported 00:25:46.987 Controller Attributes 00:25:46.987 128-bit Host Identifier: Not Supported 00:25:46.987 Non-Operational Permissive Mode: Not Supported 00:25:46.987 NVM Sets: Not Supported 00:25:46.987 Read Recovery Levels: Not Supported 00:25:46.987 Endurance Groups: Not Supported 00:25:46.987 Predictable Latency Mode: Not Supported 00:25:46.987 Traffic Based Keep ALive: Not Supported 00:25:46.987 Namespace Granularity: Not Supported 00:25:46.987 SQ Associations: Not Supported 00:25:46.987 UUID List: Not Supported 00:25:46.987 Multi-Domain Subsystem: Not Supported 00:25:46.987 Fixed Capacity Management: Not Supported 00:25:46.987 Variable Capacity Management: Not Supported 00:25:46.987 Delete Endurance Group: Not Supported 00:25:46.987 Delete NVM Set: Not Supported 00:25:46.987 Extended LBA Formats Supported: Not Supported 00:25:46.987 Flexible Data 
Placement Supported: Not Supported 00:25:46.987 00:25:46.987 Controller Memory Buffer Support 00:25:46.987 ================================ 00:25:46.987 Supported: No 00:25:46.987 00:25:46.987 Persistent Memory Region Support 00:25:46.987 ================================ 00:25:46.987 Supported: No 00:25:46.987 00:25:46.987 Admin Command Set Attributes 00:25:46.987 ============================ 00:25:46.987 Security Send/Receive: Not Supported 00:25:46.987 Format NVM: Not Supported 00:25:46.987 Firmware Activate/Download: Not Supported 00:25:46.987 Namespace Management: Not Supported 00:25:46.987 Device Self-Test: Not Supported 00:25:46.987 Directives: Not Supported 00:25:46.987 NVMe-MI: Not Supported 00:25:46.987 Virtualization Management: Not Supported 00:25:46.987 Doorbell Buffer Config: Not Supported 00:25:46.987 Get LBA Status Capability: Not Supported 00:25:46.987 Command & Feature Lockdown Capability: Not Supported 00:25:46.987 Abort Command Limit: 1 00:25:46.987 Async Event Request Limit: 1 00:25:46.987 Number of Firmware Slots: N/A 00:25:46.987 Firmware Slot 1 Read-Only: N/A 00:25:46.987 Firmware Activation Without Reset: N/A 00:25:46.987 Multiple Update Detection Support: N/A 00:25:46.987 Firmware Update Granularity: No Information Provided 00:25:46.987 Per-Namespace SMART Log: No 00:25:46.987 Asymmetric Namespace Access Log Page: Not Supported 00:25:46.987 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:46.987 Command Effects Log Page: Not Supported 00:25:46.987 Get Log Page Extended Data: Supported 00:25:46.987 Telemetry Log Pages: Not Supported 00:25:46.987 Persistent Event Log Pages: Not Supported 00:25:46.987 Supported Log Pages Log Page: May Support 00:25:46.987 Commands Supported & Effects Log Page: Not Supported 00:25:46.987 Feature Identifiers & Effects Log Page:May Support 00:25:46.987 NVMe-MI Commands & Effects Log Page: May Support 00:25:46.987 Data Area 4 for Telemetry Log: Not Supported 00:25:46.987 Error Log Page Entries Supported: 1 00:25:46.987 Keep Alive: Not Supported 00:25:46.987 00:25:46.987 NVM Command Set Attributes 00:25:46.987 ========================== 00:25:46.987 Submission Queue Entry Size 00:25:46.987 Max: 1 00:25:46.987 Min: 1 00:25:46.987 Completion Queue Entry Size 00:25:46.987 Max: 1 00:25:46.987 Min: 1 00:25:46.987 Number of Namespaces: 0 00:25:46.987 Compare Command: Not Supported 00:25:46.987 Write Uncorrectable Command: Not Supported 00:25:46.987 Dataset Management Command: Not Supported 00:25:46.987 Write Zeroes Command: Not Supported 00:25:46.987 Set Features Save Field: Not Supported 00:25:46.987 Reservations: Not Supported 00:25:46.987 Timestamp: Not Supported 00:25:46.987 Copy: Not Supported 00:25:46.987 Volatile Write Cache: Not Present 00:25:46.987 Atomic Write Unit (Normal): 1 00:25:46.987 Atomic Write Unit (PFail): 1 00:25:46.987 Atomic Compare & Write Unit: 1 00:25:46.987 Fused Compare & Write: Not Supported 00:25:46.987 Scatter-Gather List 00:25:46.987 SGL Command Set: Supported 00:25:46.987 SGL Keyed: Not Supported 00:25:46.987 SGL Bit Bucket Descriptor: Not Supported 00:25:46.987 SGL Metadata Pointer: Not Supported 00:25:46.987 Oversized SGL: Not Supported 00:25:46.987 SGL Metadata Address: Not Supported 00:25:46.987 SGL Offset: Supported 00:25:46.987 Transport SGL Data Block: Not Supported 00:25:46.987 Replay Protected Memory Block: Not Supported 00:25:46.987 00:25:46.987 Firmware Slot Information 00:25:46.987 ========================= 00:25:46.987 Active slot: 0 00:25:46.987 00:25:46.987 00:25:46.987 Error Log 
00:25:46.987 ========= 00:25:46.987 00:25:46.987 Active Namespaces 00:25:46.987 ================= 00:25:46.987 Discovery Log Page 00:25:46.987 ================== 00:25:46.987 Generation Counter: 2 00:25:46.987 Number of Records: 2 00:25:46.987 Record Format: 0 00:25:46.987 00:25:46.987 Discovery Log Entry 0 00:25:46.987 ---------------------- 00:25:46.987 Transport Type: 3 (TCP) 00:25:46.987 Address Family: 1 (IPv4) 00:25:46.987 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:46.987 Entry Flags: 00:25:46.987 Duplicate Returned Information: 0 00:25:46.987 Explicit Persistent Connection Support for Discovery: 0 00:25:46.987 Transport Requirements: 00:25:46.987 Secure Channel: Not Specified 00:25:46.987 Port ID: 1 (0x0001) 00:25:46.987 Controller ID: 65535 (0xffff) 00:25:46.987 Admin Max SQ Size: 32 00:25:46.987 Transport Service Identifier: 4420 00:25:46.987 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:46.987 Transport Address: 10.0.0.1 00:25:46.987 Discovery Log Entry 1 00:25:46.987 ---------------------- 00:25:46.987 Transport Type: 3 (TCP) 00:25:46.987 Address Family: 1 (IPv4) 00:25:46.987 Subsystem Type: 2 (NVM Subsystem) 00:25:46.987 Entry Flags: 00:25:46.987 Duplicate Returned Information: 0 00:25:46.987 Explicit Persistent Connection Support for Discovery: 0 00:25:46.987 Transport Requirements: 00:25:46.987 Secure Channel: Not Specified 00:25:46.987 Port ID: 1 (0x0001) 00:25:46.987 Controller ID: 65535 (0xffff) 00:25:46.987 Admin Max SQ Size: 32 00:25:46.987 Transport Service Identifier: 4420 00:25:46.987 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:46.987 Transport Address: 10.0.0.1 00:25:46.987 11:28:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:47.247 get_feature(0x01) failed 00:25:47.247 get_feature(0x02) failed 00:25:47.247 get_feature(0x04) failed 00:25:47.247 ===================================================== 00:25:47.247 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:47.247 ===================================================== 00:25:47.247 Controller Capabilities/Features 00:25:47.247 ================================ 00:25:47.247 Vendor ID: 0000 00:25:47.247 Subsystem Vendor ID: 0000 00:25:47.247 Serial Number: 97813f18b37292201d84 00:25:47.247 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:47.247 Firmware Version: 6.8.9-20 00:25:47.247 Recommended Arb Burst: 6 00:25:47.247 IEEE OUI Identifier: 00 00 00 00:25:47.247 Multi-path I/O 00:25:47.247 May have multiple subsystem ports: Yes 00:25:47.247 May have multiple controllers: Yes 00:25:47.247 Associated with SR-IOV VF: No 00:25:47.247 Max Data Transfer Size: Unlimited 00:25:47.247 Max Number of Namespaces: 1024 00:25:47.247 Max Number of I/O Queues: 128 00:25:47.247 NVMe Specification Version (VS): 1.3 00:25:47.247 NVMe Specification Version (Identify): 1.3 00:25:47.247 Maximum Queue Entries: 1024 00:25:47.247 Contiguous Queues Required: No 00:25:47.247 Arbitration Mechanisms Supported 00:25:47.247 Weighted Round Robin: Not Supported 00:25:47.247 Vendor Specific: Not Supported 00:25:47.247 Reset Timeout: 7500 ms 00:25:47.247 Doorbell Stride: 4 bytes 00:25:47.247 NVM Subsystem Reset: Not Supported 00:25:47.247 Command Sets Supported 00:25:47.247 NVM Command Set: Supported 00:25:47.247 Boot Partition: Not Supported 00:25:47.247 Memory 
Page Size Minimum: 4096 bytes 00:25:47.247 Memory Page Size Maximum: 4096 bytes 00:25:47.247 Persistent Memory Region: Not Supported 00:25:47.247 Optional Asynchronous Events Supported 00:25:47.247 Namespace Attribute Notices: Supported 00:25:47.247 Firmware Activation Notices: Not Supported 00:25:47.247 ANA Change Notices: Supported 00:25:47.247 PLE Aggregate Log Change Notices: Not Supported 00:25:47.247 LBA Status Info Alert Notices: Not Supported 00:25:47.247 EGE Aggregate Log Change Notices: Not Supported 00:25:47.247 Normal NVM Subsystem Shutdown event: Not Supported 00:25:47.247 Zone Descriptor Change Notices: Not Supported 00:25:47.247 Discovery Log Change Notices: Not Supported 00:25:47.247 Controller Attributes 00:25:47.247 128-bit Host Identifier: Supported 00:25:47.247 Non-Operational Permissive Mode: Not Supported 00:25:47.247 NVM Sets: Not Supported 00:25:47.247 Read Recovery Levels: Not Supported 00:25:47.247 Endurance Groups: Not Supported 00:25:47.247 Predictable Latency Mode: Not Supported 00:25:47.247 Traffic Based Keep ALive: Supported 00:25:47.247 Namespace Granularity: Not Supported 00:25:47.247 SQ Associations: Not Supported 00:25:47.247 UUID List: Not Supported 00:25:47.247 Multi-Domain Subsystem: Not Supported 00:25:47.247 Fixed Capacity Management: Not Supported 00:25:47.247 Variable Capacity Management: Not Supported 00:25:47.247 Delete Endurance Group: Not Supported 00:25:47.247 Delete NVM Set: Not Supported 00:25:47.247 Extended LBA Formats Supported: Not Supported 00:25:47.247 Flexible Data Placement Supported: Not Supported 00:25:47.247 00:25:47.247 Controller Memory Buffer Support 00:25:47.247 ================================ 00:25:47.247 Supported: No 00:25:47.247 00:25:47.247 Persistent Memory Region Support 00:25:47.247 ================================ 00:25:47.247 Supported: No 00:25:47.247 00:25:47.247 Admin Command Set Attributes 00:25:47.247 ============================ 00:25:47.247 Security Send/Receive: Not Supported 00:25:47.247 Format NVM: Not Supported 00:25:47.247 Firmware Activate/Download: Not Supported 00:25:47.247 Namespace Management: Not Supported 00:25:47.247 Device Self-Test: Not Supported 00:25:47.247 Directives: Not Supported 00:25:47.247 NVMe-MI: Not Supported 00:25:47.247 Virtualization Management: Not Supported 00:25:47.247 Doorbell Buffer Config: Not Supported 00:25:47.247 Get LBA Status Capability: Not Supported 00:25:47.247 Command & Feature Lockdown Capability: Not Supported 00:25:47.247 Abort Command Limit: 4 00:25:47.247 Async Event Request Limit: 4 00:25:47.247 Number of Firmware Slots: N/A 00:25:47.247 Firmware Slot 1 Read-Only: N/A 00:25:47.247 Firmware Activation Without Reset: N/A 00:25:47.247 Multiple Update Detection Support: N/A 00:25:47.247 Firmware Update Granularity: No Information Provided 00:25:47.247 Per-Namespace SMART Log: Yes 00:25:47.248 Asymmetric Namespace Access Log Page: Supported 00:25:47.248 ANA Transition Time : 10 sec 00:25:47.248 00:25:47.248 Asymmetric Namespace Access Capabilities 00:25:47.248 ANA Optimized State : Supported 00:25:47.248 ANA Non-Optimized State : Supported 00:25:47.248 ANA Inaccessible State : Supported 00:25:47.248 ANA Persistent Loss State : Supported 00:25:47.248 ANA Change State : Supported 00:25:47.248 ANAGRPID is not changed : No 00:25:47.248 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:47.248 00:25:47.248 ANA Group Identifier Maximum : 128 00:25:47.248 Number of ANA Group Identifiers : 128 00:25:47.248 Max Number of Allowed Namespaces : 1024 00:25:47.248 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:25:47.248 Command Effects Log Page: Supported 00:25:47.248 Get Log Page Extended Data: Supported 00:25:47.248 Telemetry Log Pages: Not Supported 00:25:47.248 Persistent Event Log Pages: Not Supported 00:25:47.248 Supported Log Pages Log Page: May Support 00:25:47.248 Commands Supported & Effects Log Page: Not Supported 00:25:47.248 Feature Identifiers & Effects Log Page:May Support 00:25:47.248 NVMe-MI Commands & Effects Log Page: May Support 00:25:47.248 Data Area 4 for Telemetry Log: Not Supported 00:25:47.248 Error Log Page Entries Supported: 128 00:25:47.248 Keep Alive: Supported 00:25:47.248 Keep Alive Granularity: 1000 ms 00:25:47.248 00:25:47.248 NVM Command Set Attributes 00:25:47.248 ========================== 00:25:47.248 Submission Queue Entry Size 00:25:47.248 Max: 64 00:25:47.248 Min: 64 00:25:47.248 Completion Queue Entry Size 00:25:47.248 Max: 16 00:25:47.248 Min: 16 00:25:47.248 Number of Namespaces: 1024 00:25:47.248 Compare Command: Not Supported 00:25:47.248 Write Uncorrectable Command: Not Supported 00:25:47.248 Dataset Management Command: Supported 00:25:47.248 Write Zeroes Command: Supported 00:25:47.248 Set Features Save Field: Not Supported 00:25:47.248 Reservations: Not Supported 00:25:47.248 Timestamp: Not Supported 00:25:47.248 Copy: Not Supported 00:25:47.248 Volatile Write Cache: Present 00:25:47.248 Atomic Write Unit (Normal): 1 00:25:47.248 Atomic Write Unit (PFail): 1 00:25:47.248 Atomic Compare & Write Unit: 1 00:25:47.248 Fused Compare & Write: Not Supported 00:25:47.248 Scatter-Gather List 00:25:47.248 SGL Command Set: Supported 00:25:47.248 SGL Keyed: Not Supported 00:25:47.248 SGL Bit Bucket Descriptor: Not Supported 00:25:47.248 SGL Metadata Pointer: Not Supported 00:25:47.248 Oversized SGL: Not Supported 00:25:47.248 SGL Metadata Address: Not Supported 00:25:47.248 SGL Offset: Supported 00:25:47.248 Transport SGL Data Block: Not Supported 00:25:47.248 Replay Protected Memory Block: Not Supported 00:25:47.248 00:25:47.248 Firmware Slot Information 00:25:47.248 ========================= 00:25:47.248 Active slot: 0 00:25:47.248 00:25:47.248 Asymmetric Namespace Access 00:25:47.248 =========================== 00:25:47.248 Change Count : 0 00:25:47.248 Number of ANA Group Descriptors : 1 00:25:47.248 ANA Group Descriptor : 0 00:25:47.248 ANA Group ID : 1 00:25:47.248 Number of NSID Values : 1 00:25:47.248 Change Count : 0 00:25:47.248 ANA State : 1 00:25:47.248 Namespace Identifier : 1 00:25:47.248 00:25:47.248 Commands Supported and Effects 00:25:47.248 ============================== 00:25:47.248 Admin Commands 00:25:47.248 -------------- 00:25:47.248 Get Log Page (02h): Supported 00:25:47.248 Identify (06h): Supported 00:25:47.248 Abort (08h): Supported 00:25:47.248 Set Features (09h): Supported 00:25:47.248 Get Features (0Ah): Supported 00:25:47.248 Asynchronous Event Request (0Ch): Supported 00:25:47.248 Keep Alive (18h): Supported 00:25:47.248 I/O Commands 00:25:47.248 ------------ 00:25:47.248 Flush (00h): Supported 00:25:47.248 Write (01h): Supported LBA-Change 00:25:47.248 Read (02h): Supported 00:25:47.248 Write Zeroes (08h): Supported LBA-Change 00:25:47.248 Dataset Management (09h): Supported 00:25:47.248 00:25:47.248 Error Log 00:25:47.248 ========= 00:25:47.248 Entry: 0 00:25:47.248 Error Count: 0x3 00:25:47.248 Submission Queue Id: 0x0 00:25:47.248 Command Id: 0x5 00:25:47.248 Phase Bit: 0 00:25:47.248 Status Code: 0x2 00:25:47.248 Status Code Type: 0x0 00:25:47.248 Do Not Retry: 1 00:25:47.507 Error 
Location: 0x28 00:25:47.507 LBA: 0x0 00:25:47.507 Namespace: 0x0 00:25:47.507 Vendor Log Page: 0x0 00:25:47.507 ----------- 00:25:47.507 Entry: 1 00:25:47.507 Error Count: 0x2 00:25:47.507 Submission Queue Id: 0x0 00:25:47.507 Command Id: 0x5 00:25:47.507 Phase Bit: 0 00:25:47.507 Status Code: 0x2 00:25:47.507 Status Code Type: 0x0 00:25:47.507 Do Not Retry: 1 00:25:47.507 Error Location: 0x28 00:25:47.507 LBA: 0x0 00:25:47.507 Namespace: 0x0 00:25:47.507 Vendor Log Page: 0x0 00:25:47.507 ----------- 00:25:47.507 Entry: 2 00:25:47.507 Error Count: 0x1 00:25:47.507 Submission Queue Id: 0x0 00:25:47.507 Command Id: 0x4 00:25:47.507 Phase Bit: 0 00:25:47.507 Status Code: 0x2 00:25:47.507 Status Code Type: 0x0 00:25:47.507 Do Not Retry: 1 00:25:47.507 Error Location: 0x28 00:25:47.507 LBA: 0x0 00:25:47.507 Namespace: 0x0 00:25:47.507 Vendor Log Page: 0x0 00:25:47.507 00:25:47.507 Number of Queues 00:25:47.507 ================ 00:25:47.507 Number of I/O Submission Queues: 128 00:25:47.507 Number of I/O Completion Queues: 128 00:25:47.507 00:25:47.507 ZNS Specific Controller Data 00:25:47.507 ============================ 00:25:47.507 Zone Append Size Limit: 0 00:25:47.507 00:25:47.507 00:25:47.507 Active Namespaces 00:25:47.507 ================= 00:25:47.507 get_feature(0x05) failed 00:25:47.507 Namespace ID:1 00:25:47.507 Command Set Identifier: NVM (00h) 00:25:47.507 Deallocate: Supported 00:25:47.507 Deallocated/Unwritten Error: Not Supported 00:25:47.507 Deallocated Read Value: Unknown 00:25:47.507 Deallocate in Write Zeroes: Not Supported 00:25:47.507 Deallocated Guard Field: 0xFFFF 00:25:47.507 Flush: Supported 00:25:47.507 Reservation: Not Supported 00:25:47.507 Namespace Sharing Capabilities: Multiple Controllers 00:25:47.507 Size (in LBAs): 1310720 (5GiB) 00:25:47.507 Capacity (in LBAs): 1310720 (5GiB) 00:25:47.507 Utilization (in LBAs): 1310720 (5GiB) 00:25:47.507 UUID: 79fa9d55-a0bb-4c01-9773-648021994bdb 00:25:47.507 Thin Provisioning: Not Supported 00:25:47.507 Per-NS Atomic Units: Yes 00:25:47.507 Atomic Boundary Size (Normal): 0 00:25:47.507 Atomic Boundary Size (PFail): 0 00:25:47.507 Atomic Boundary Offset: 0 00:25:47.507 NGUID/EUI64 Never Reused: No 00:25:47.507 ANA group ID: 1 00:25:47.507 Namespace Write Protected: No 00:25:47.507 Number of LBA Formats: 1 00:25:47.507 Current LBA Format: LBA Format #00 00:25:47.507 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:47.507 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.507 rmmod nvme_tcp 00:25:47.507 rmmod nvme_fabrics 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:47.507 11:28:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:47.507 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:47.766 11:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:48.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:48.701 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:48.701 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:48.701 ************************************ 00:25:48.701 END TEST nvmf_identify_kernel_target 00:25:48.701 ************************************ 00:25:48.701 00:25:48.701 real 0m3.412s 00:25:48.701 user 0m1.207s 00:25:48.701 sys 0m1.526s 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.701 ************************************ 00:25:48.701 START TEST nvmf_auth_host 00:25:48.701 ************************************ 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:48.701 * Looking for test storage... 
00:25:48.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:48.701 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:48.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.961 --rc genhtml_branch_coverage=1 00:25:48.961 --rc genhtml_function_coverage=1 00:25:48.961 --rc genhtml_legend=1 00:25:48.961 --rc geninfo_all_blocks=1 00:25:48.961 --rc geninfo_unexecuted_blocks=1 00:25:48.961 00:25:48.961 ' 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:48.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.961 --rc genhtml_branch_coverage=1 00:25:48.961 --rc genhtml_function_coverage=1 00:25:48.961 --rc genhtml_legend=1 00:25:48.961 --rc geninfo_all_blocks=1 00:25:48.961 --rc geninfo_unexecuted_blocks=1 00:25:48.961 00:25:48.961 ' 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:48.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.961 --rc genhtml_branch_coverage=1 00:25:48.961 --rc genhtml_function_coverage=1 00:25:48.961 --rc genhtml_legend=1 00:25:48.961 --rc geninfo_all_blocks=1 00:25:48.961 --rc geninfo_unexecuted_blocks=1 00:25:48.961 00:25:48.961 ' 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:48.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.961 --rc genhtml_branch_coverage=1 00:25:48.961 --rc genhtml_function_coverage=1 00:25:48.961 --rc genhtml_legend=1 00:25:48.961 --rc geninfo_all_blocks=1 00:25:48.961 --rc geninfo_unexecuted_blocks=1 00:25:48.961 00:25:48.961 ' 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.961 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.962 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:48.962 Cannot find device "nvmf_init_br" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:48.962 Cannot find device "nvmf_init_br2" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:48.962 Cannot find device "nvmf_tgt_br" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:48.962 Cannot find device "nvmf_tgt_br2" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:48.962 Cannot find device "nvmf_init_br" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:48.962 Cannot find device "nvmf_init_br2" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:48.962 Cannot find device "nvmf_tgt_br" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:48.962 Cannot find device "nvmf_tgt_br2" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:48.962 Cannot find device "nvmf_br" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:48.962 Cannot find device "nvmf_init_if" 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:25:48.962 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:49.221 Cannot find device "nvmf_init_if2" 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:49.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.221 11:28:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:49.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:49.221 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:49.221 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:25:49.221 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:49.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:49.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:25:49.481 00:25:49.481 --- 10.0.0.3 ping statistics --- 00:25:49.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.481 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:49.481 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:49.481 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:25:49.481 00:25:49.481 --- 10.0.0.4 ping statistics --- 00:25:49.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.481 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:49.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:49.481 00:25:49.481 --- 10.0.0.1 ping statistics --- 00:25:49.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.481 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:49.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:49.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:25:49.481 00:25:49.481 --- 10.0.0.2 ping statistics --- 00:25:49.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.481 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=85337 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 85337 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 85337 ']' 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
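Note on the nvmf_veth_init sequence above: it builds the virtual test network used by the tcp transport under NET_TYPE=virt: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) joining both sides, initiator addresses 10.0.0.1/.2 and target addresses 10.0.0.3/.4, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed, hand-run equivalent is sketched below with only one veth pair per side; interface and namespace names follow the log, but the script is an illustration of the topology, not the exact common.sh code.

    # Sketch: minimal veth/bridge/netns topology for a local NVMe-oF TCP test.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_br      up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                  # enslave both sides to the bridge
    ip link set nvmf_tgt_br  master nvmf_br

    # Allow NVMe/TCP traffic to the listener port and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity checks, as done in the log
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place, nvmf_tgt is started inside the namespace (the NVMF_TARGET_NS_CMD prefix seen above) while the host side acts as the initiator.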
00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.481 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=34c76a8226407b2a1fd8fc91f077171d 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cX9 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 34c76a8226407b2a1fd8fc91f077171d 0 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 34c76a8226407b2a1fd8fc91f077171d 0 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:50.415 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:50.416 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=34c76a8226407b2a1fd8fc91f077171d 00:25:50.416 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:50.416 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cX9 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cX9 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.cX9 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:50.674 11:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=68489e00c4b62bb52c7245b1a736af968005c8af969711b1994859a9758710b8 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.P7t 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 68489e00c4b62bb52c7245b1a736af968005c8af969711b1994859a9758710b8 3 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 68489e00c4b62bb52c7245b1a736af968005c8af969711b1994859a9758710b8 3 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=68489e00c4b62bb52c7245b1a736af968005c8af969711b1994859a9758710b8 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.P7t 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.P7t 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.P7t 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=64750cde5f1de2bf939d490e7c4679214074b8dbf0e44fd9 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Fnk 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 64750cde5f1de2bf939d490e7c4679214074b8dbf0e44fd9 0 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 64750cde5f1de2bf939d490e7c4679214074b8dbf0e44fd9 0 
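Note on the gen_dhchap_key calls traced above and below: each one draws a random secret from /dev/urandom and writes a /tmp/spdk.key-* file in DHHC-1 form, "DHHC-1:<digest id>:<base64 payload>:", where the digest id follows the map traced above (null=0, sha256=1, sha384=2, sha512=3) and the payload appears to be the generated secret with a 4-byte trailer (presumably a CRC32) appended before base64 encoding. The rough decoder below is an inference from the log output (for example, the DHHC-1:00:NjQ3NTBjZGU1... key later in this log decodes back to the 64750cde5f... secret generated here); the file name, the single-line file content, and the trailer meaning are assumptions, and head -c -4 relies on GNU coreutils.

    # Sketch: unpack a DHHC-1 key file produced by gen_dhchap_key (format inferred from the log).
    key=$(cat /tmp/spdk.key-null.Fnk)                  # e.g. DHHC-1:00:NjQ3NTBj...:
    echo "digest id: $(echo "$key" | cut -d: -f2)"     # 00 = null, 01 = sha256, 02 = sha384, 03 = sha512
    payload=$(echo "$key" | cut -d: -f3)               # base64 field between the colons
    echo "$payload" | base64 -d | head -c -4; echo     # the secret as generated above
    echo "$payload" | base64 -d | tail -c 4 | xxd -p   # 4-byte trailer (assumed CRC32)

These files are then registered with the SPDK target via keyring_file_add_key (key0/ckey0, key1/ckey1, ...) further down in this log.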
00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=64750cde5f1de2bf939d490e7c4679214074b8dbf0e44fd9 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Fnk 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Fnk 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Fnk 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b174e6ff0e284fde9b6309808aa68005ad8abb88913bbf3b 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1vq 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b174e6ff0e284fde9b6309808aa68005ad8abb88913bbf3b 2 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b174e6ff0e284fde9b6309808aa68005ad8abb88913bbf3b 2 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b174e6ff0e284fde9b6309808aa68005ad8abb88913bbf3b 00:25:50.674 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1vq 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1vq 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1vq 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.675 11:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=608f3dba67e96542d23cd3d6ad9bbe0e 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oU0 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 608f3dba67e96542d23cd3d6ad9bbe0e 1 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 608f3dba67e96542d23cd3d6ad9bbe0e 1 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=608f3dba67e96542d23cd3d6ad9bbe0e 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:50.675 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oU0 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oU0 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.oU0 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f43ff91d5c4397fa71929b49e2ee90e8 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.KNf 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f43ff91d5c4397fa71929b49e2ee90e8 1 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f43ff91d5c4397fa71929b49e2ee90e8 1 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=f43ff91d5c4397fa71929b49e2ee90e8 00:25:50.933 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.KNf 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.KNf 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.KNf 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9461363dfc90d4324395ec97969c55efdcc3009a442963db 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LIn 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9461363dfc90d4324395ec97969c55efdcc3009a442963db 2 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9461363dfc90d4324395ec97969c55efdcc3009a442963db 2 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9461363dfc90d4324395ec97969c55efdcc3009a442963db 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LIn 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LIn 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.LIn 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:50.934 11:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=88f2a2c8cd1ee42fae7218c806578d6f 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wJj 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 88f2a2c8cd1ee42fae7218c806578d6f 0 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 88f2a2c8cd1ee42fae7218c806578d6f 0 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=88f2a2c8cd1ee42fae7218c806578d6f 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wJj 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wJj 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.wJj 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4f011c069be539eace6838c3fea161449a4beea258bddf385bd2deba9250002a 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zKb 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4f011c069be539eace6838c3fea161449a4beea258bddf385bd2deba9250002a 3 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4f011c069be539eace6838c3fea161449a4beea258bddf385bd2deba9250002a 3 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4f011c069be539eace6838c3fea161449a4beea258bddf385bd2deba9250002a 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:50.934 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zKb 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zKb 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zKb 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 85337 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 85337 ']' 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.194 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cX9 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.P7t ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P7t 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Fnk 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1vq ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.1vq 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.oU0 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.KNf ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KNf 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.LIn 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.wJj ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.wJj 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zKb 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.469 11:28:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:51.469 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:51.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:51.751 Waiting for block devices as requested 00:25:51.751 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:52.009 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:52.576 No valid GPT data, bailing 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:52.576 No valid GPT data, bailing 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:52.576 No valid GPT data, bailing 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:52.576 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:52.577 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:52.577 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:52.835 No valid GPT data, bailing 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -a 10.0.0.1 -t tcp -s 4420 00:25:52.836 00:25:52.836 Discovery Log Number of Records 2, Generation counter 2 00:25:52.836 =====Discovery Log Entry 0====== 00:25:52.836 trtype: tcp 00:25:52.836 adrfam: ipv4 00:25:52.836 subtype: current discovery subsystem 00:25:52.836 treq: not specified, sq flow control disable supported 00:25:52.836 portid: 1 00:25:52.836 trsvcid: 4420 00:25:52.836 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:52.836 traddr: 10.0.0.1 00:25:52.836 eflags: none 00:25:52.836 sectype: none 00:25:52.836 =====Discovery Log Entry 1====== 00:25:52.836 trtype: tcp 00:25:52.836 adrfam: ipv4 00:25:52.836 subtype: nvme subsystem 00:25:52.836 treq: not specified, sq flow control disable supported 00:25:52.836 portid: 1 00:25:52.836 trsvcid: 4420 00:25:52.836 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:52.836 traddr: 10.0.0.1 00:25:52.836 eflags: none 00:25:52.836 sectype: none 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.836 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.095 nvme0n1 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.095 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.354 nvme0n1 00:25:53.354 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.354 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.354 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.354 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.354 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.354 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.354 
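Everything the kernel target needs is expressed through the mkdir/echo/ln -s calls against configfs shown above. Condensed into one place, and with the redirection targets filled in (bash xtrace does not print redirections, so the attribute paths below are assumptions based on the kernel's nvmet configfs layout with CONFIG_NVME_TARGET_AUTH, not something visible in this log), the setup amounts to roughly:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"    # block device selected above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"                   # NVMF_INITIATOR_IP
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Host authentication side (what nvmet_auth_set_key sha256 ffdhe2048 <keyid> writes):
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"                    # assumed target of the bare 'echo 0'
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "$key"         > "$host/dhchap_key"                  # the DHHC-1:... host key above
echo "$ckey"        > "$host/dhchap_ctrl_key"             # controller key, when one is set
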
11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.354 11:29:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.354 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.355 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.355 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.355 nvme0n1 00:25:53.355 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.355 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.355 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.355 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.355 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.355 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.613 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.613 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.613 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.613 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.613 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.613 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.613 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:53.613 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.613 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:25:53.614 11:29:00 
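The DHHC-1 key strings in these iterations are self-describing: the second field indicates the hash the secret is sized for (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the third field is the base64 of the secret followed by a 4-byte integrity trailer (a CRC-32 in the in-band-authentication secret representation, as far as that format goes). A quick sanity check on the keyid=2 host key shown just above, purely as an illustration:

key='DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB:'
payload=$(cut -d: -f3 <<< "$key")
base64 -d <<< "$payload" | wc -c    # prints 36: 32-byte (SHA-256-sized) secret + 4-byte trailer
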
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.614 nvme0n1 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.614 11:29:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.614 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.873 nvme0n1 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.873 
11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.873 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
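On the initiator side, each iteration is the same four RPCs the xtrace keeps showing: restrict the allowed digests/dhgroups, attach with the DH-CHAP key(s), confirm the controller came up, and detach. Written directly against scripts/rpc.py (rpc_cmd in the log is the test harness's wrapper for the same RPC socket; key1/ckey1 are key names assumed to have been registered with the application earlier in the run, outside this excerpt), one pass looks like:

./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0
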
00:25:54.132 nvme0n1 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.132 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.391 11:29:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.391 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.649 nvme0n1 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.649 11:29:01 
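From here to the end of the test the output simply sweeps the full matrix. The loop the host/auth.sh@100-104 markers come from is, in outline (digest and dhgroup lists taken from the printf calls earlier in the log, keys 0-4 defined earlier in the run):

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
                for keyid in "${!keys[@]}"; do
                        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # provision the kernel target
                        connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
                done
        done
done
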
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.649 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.650 11:29:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.650 nvme0n1 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.650 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.909 nvme0n1 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.909 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.168 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.168 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.168 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:55.168 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.168 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.168 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.168 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.168 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.169 nvme0n1 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.169 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.428 nvme0n1 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.428 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.995 11:29:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.995 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.254 nvme0n1 00:25:56.254 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.254 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.254 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.254 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.254 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.254 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.254 11:29:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.254 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.513 nvme0n1 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.513 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.771 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.771 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.771 nvme0n1 00:25:56.771 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.771 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.771 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.771 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.771 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.771 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.771 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.772 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.772 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.772 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.030 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.030 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.030 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.031 nvme0n1 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.031 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.289 11:29:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.289 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.290 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.290 nvme0n1 00:25:57.290 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.290 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.290 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.290 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.290 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.290 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.548 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.446 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.446 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.705 nvme0n1 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.705 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.277 nvme0n1 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.277 11:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.277 11:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.277 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.278 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.278 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.278 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.278 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.278 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.536 nvme0n1 00:26:00.536 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.536 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.536 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.536 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.537 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.537 
11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.105 nvme0n1 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.105 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.365 nvme0n1 00:26:01.365 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.365 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.365 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.365 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.365 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.365 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.623 11:29:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.623 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.624 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.624 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.624 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.191 nvme0n1 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.191 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.758 nvme0n1 00:26:02.758 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.758 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.758 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.758 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.758 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.758 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.017 
11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.017 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.018 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.018 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.018 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.018 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.585 nvme0n1 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:03.585 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.586 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.521 nvme0n1 00:26:04.521 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.521 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.521 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.521 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.521 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.521 11:29:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.521 11:29:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.521 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.089 nvme0n1 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.089 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:05.349 nvme0n1 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.349 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.349 nvme0n1 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.349 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:05.609 
11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.609 nvme0n1 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:05.609 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.610 
11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.610 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.869 nvme0n1 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.869 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.129 nvme0n1 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.129 nvme0n1 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.129 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.388 
11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.388 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.388 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.388 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.388 11:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.388 nvme0n1 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.388 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.389 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.389 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.389 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.389 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.389 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.389 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.389 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:06.648 11:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.648 nvme0n1 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.648 11:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.648 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.908 nvme0n1 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.908 
11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.908 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:07.168 nvme0n1 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.168 11:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.168 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.443 nvme0n1 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.443 11:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.443 11:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.443 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.747 nvme0n1 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.747 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.748 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.007 nvme0n1 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.007 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.265 nvme0n1 00:26:08.265 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.265 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.266 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.524 nvme0n1 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.524 11:29:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.524 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.090 nvme0n1 00:26:09.090 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.090 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.090 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.090 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.090 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.090 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.090 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.091 11:29:15 
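The host half of each iteration is the connect_authenticate call traced above: restrict the SPDK initiator to one digest/dhgroup pair, attach with the matching DH-HMAC-CHAP keys, confirm the controller came up, and detach again. A minimal standalone sketch of that cycle, driven through scripts/rpc.py instead of the suite's rpc_cmd wrapper, and assuming key0/ckey0 were registered in the keyring earlier in the test:

rpc=scripts/rpc.py

# Allow only the digest/dhgroup pair under test on the host side.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Attach with DH-HMAC-CHAP; ckey0 enables bidirectional (controller) authentication.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The controller only shows up if authentication succeeded.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0

# Tear down before the next digest/dhgroup/keyid combination.
$rpc bdev_nvme_detach_controller nvme0
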
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.091 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.349 nvme0n1 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.349 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.350 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.350 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.350 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.350 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.350 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.916 nvme0n1 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.916 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.917 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.917 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.175 nvme0n1 00:26:10.175 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.175 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.175 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.175 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.175 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.175 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.434 11:29:17 
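The get_main_ns_ip expansion that repeats before every attach simply resolves which address the initiator should dial for the active transport. A condensed, hand-written form of that logic follows; TEST_TRANSPORT is an assumed name for the variable holding "tcp", and NVMF_INITIATOR_IP is 10.0.0.1 in this run.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    # Pick the variable name for this transport, then dereference it.
    ip=${ip_candidates[$TEST_TRANSPORT]:?}
    [[ -n ${!ip} ]] && echo "${!ip}"    # 10.0.0.1 for tcp in this run
}
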
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.434 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.693 nvme0n1 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.693 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.260 nvme0n1 00:26:11.260 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.260 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.260 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.260 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.260 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.260 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.518 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.518 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.518 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.518 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.518 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.518 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.518 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:11.518 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.519 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.087 nvme0n1 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.087 11:29:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.087 11:29:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.087 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.654 nvme0n1 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.654 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.914 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.914 
11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.480 nvme0n1 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.480 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.047 nvme0n1 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:14.047 11:29:20 
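At this point the trace has moved from sha384 on to sha512 with ffdhe2048: host/auth.sh sweeps every digest, DH group and key index in nested loops, running the target-side key setup and the host-side connect for each combination. Roughly, with array contents limited to the values visible in this portion of the trace (the script itself may cover more):

digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                          # keys/ckeys defined by the script
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (auth.sh@103)
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (auth.sh@104)
        done
    done
done
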
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:14.047 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.048 11:29:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.048 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.307 nvme0n1 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:14.307 11:29:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.307 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.307 nvme0n1 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.307 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.567 nvme0n1 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:14.567 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.568 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.826 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.826 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.826 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.826 nvme0n1 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.827 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.086 nvme0n1 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:15.086 nvme0n1 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.086 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.346 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.346 nvme0n1 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:15.346 
11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.346 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.605 nvme0n1 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.605 
11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.605 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.606 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.606 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.606 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.606 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.606 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.606 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.864 nvme0n1 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.864 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.865 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.865 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.865 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.124 nvme0n1 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.124 11:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.383 nvme0n1 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.383 
11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.383 11:29:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.383 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.654 nvme0n1 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:16.654 11:29:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.654 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.925 nvme0n1 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.925 11:29:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.925 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.184 nvme0n1 00:26:17.184 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.184 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.184 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.184 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.184 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.184 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.184 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.185 
11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.185 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:17.444 nvme0n1 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.444 11:29:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.444 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.012 nvme0n1 00:26:18.012 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.012 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.012 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.012 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.012 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.012 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.013 11:29:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.013 11:29:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.013 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.272 nvme0n1 00:26:18.272 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.272 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.272 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.272 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.272 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.272 11:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.272 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.839 nvme0n1 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.839 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.098 nvme0n1 00:26:19.098 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.098 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.098 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.098 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.098 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.098 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.098 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.098 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.357 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.616 nvme0n1 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.616 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRjNzZhODIyNjQwN2IyYTFmZDhmYzkxZjA3NzE3MWRISeb+: 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: ]] 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njg0ODllMDBjNGI2MmJiNTJjNzI0NWIxYTczNmFmOTY4MDA1YzhhZjk2OTcxMWIxOTk0ODU5YTk3NTg3MTBiOCr76uk=: 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.617 11:29:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.617 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.554 nvme0n1 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.554 11:29:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.554 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.122 nvme0n1 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.122 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.123 11:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.059 nvme0n1 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTQ2MTM2M2RmYzkwZDQzMjQzOTVlYzk3OTY5YzU1ZWZkY2MzMDA5YTQ0Mjk2M2RiDpv1vw==: 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: ]] 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhmMmEyYzhjZDFlZTQyZmFlNzIxOGM4MDY1NzhkNma8PsU7: 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.059 11:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.628 nvme0n1 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGYwMTFjMDY5YmU1MzllYWNlNjgzOGMzZmVhMTYxNDQ5YTRiZWVhMjU4YmRkZjM4NWJkMmRlYmE5MjUwMDAyYRdXAvA=: 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.628 11:29:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.628 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.195 nvme0n1 00:26:23.195 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.195 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.195 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.195 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.195 11:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.195 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.454 request: 00:26:23.454 { 00:26:23.454 "name": "nvme0", 00:26:23.454 "trtype": "tcp", 00:26:23.454 "traddr": "10.0.0.1", 00:26:23.454 "adrfam": "ipv4", 00:26:23.454 "trsvcid": "4420", 00:26:23.454 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:23.454 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:23.454 "prchk_reftag": false, 00:26:23.454 "prchk_guard": false, 00:26:23.454 "hdgst": false, 00:26:23.454 "ddgst": false, 00:26:23.454 "allow_unrecognized_csi": false, 00:26:23.454 "method": "bdev_nvme_attach_controller", 00:26:23.454 "req_id": 1 00:26:23.454 } 00:26:23.454 Got JSON-RPC error response 00:26:23.454 response: 00:26:23.454 { 00:26:23.454 "code": -5, 00:26:23.454 "message": "Input/output error" 00:26:23.454 } 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.454 request: 00:26:23.454 { 00:26:23.454 "name": "nvme0", 00:26:23.454 "trtype": "tcp", 00:26:23.454 "traddr": "10.0.0.1", 00:26:23.454 "adrfam": "ipv4", 00:26:23.454 "trsvcid": "4420", 00:26:23.454 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:23.454 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:23.454 "prchk_reftag": false, 00:26:23.454 "prchk_guard": false, 00:26:23.454 "hdgst": false, 00:26:23.454 "ddgst": false, 00:26:23.454 "dhchap_key": "key2", 00:26:23.454 "allow_unrecognized_csi": false, 00:26:23.454 "method": "bdev_nvme_attach_controller", 00:26:23.454 "req_id": 1 00:26:23.454 } 00:26:23.454 Got JSON-RPC error response 00:26:23.454 response: 00:26:23.454 { 00:26:23.454 "code": -5, 00:26:23.454 "message": "Input/output error" 00:26:23.454 } 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:23.454 11:29:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.454 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.714 request: 00:26:23.714 { 00:26:23.714 "name": "nvme0", 00:26:23.714 "trtype": "tcp", 00:26:23.714 "traddr": "10.0.0.1", 00:26:23.714 "adrfam": "ipv4", 00:26:23.714 "trsvcid": "4420", 
00:26:23.714 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:23.714 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:23.714 "prchk_reftag": false, 00:26:23.714 "prchk_guard": false, 00:26:23.714 "hdgst": false, 00:26:23.714 "ddgst": false, 00:26:23.714 "dhchap_key": "key1", 00:26:23.714 "dhchap_ctrlr_key": "ckey2", 00:26:23.714 "allow_unrecognized_csi": false, 00:26:23.714 "method": "bdev_nvme_attach_controller", 00:26:23.714 "req_id": 1 00:26:23.714 } 00:26:23.714 Got JSON-RPC error response 00:26:23.714 response: 00:26:23.714 { 00:26:23.714 "code": -5, 00:26:23.714 "message": "Input/output error" 00:26:23.714 } 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.714 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.715 nvme0n1 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.715 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.974 request: 00:26:23.974 { 00:26:23.974 "name": "nvme0", 00:26:23.974 "dhchap_key": "key1", 00:26:23.974 "dhchap_ctrlr_key": "ckey2", 00:26:23.974 "method": "bdev_nvme_set_keys", 00:26:23.974 "req_id": 1 00:26:23.974 } 00:26:23.974 Got JSON-RPC error response 00:26:23.974 response: 00:26:23.974 
{ 00:26:23.974 "code": -13, 00:26:23.974 "message": "Permission denied" 00:26:23.974 } 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:23.974 11:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3NTBjZGU1ZjFkZTJiZjkzOWQ0OTBlN2M0Njc5MjE0MDc0YjhkYmYwZTQ0ZmQ5LZW9Iw==: 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: ]] 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE3NGU2ZmYwZTI4NGZkZTliNjMwOTgwOGFhNjgwMDVhZDhhYmI4ODkxM2JiZjNi5FvOQg==: 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.910 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.169 nvme0n1 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4ZjNkYmE2N2U5NjU0MmQyM2NkM2Q2YWQ5YmJlMGWeKEQB: 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: ]] 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzZmY5MWQ1YzQzOTdmYTcxOTI5YjQ5ZTJlZTkwZTgXR8Ln: 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.169 request: 00:26:25.169 { 00:26:25.169 "name": "nvme0", 00:26:25.169 "dhchap_key": "key2", 00:26:25.169 "dhchap_ctrlr_key": "ckey1", 00:26:25.169 "method": "bdev_nvme_set_keys", 00:26:25.169 "req_id": 1 00:26:25.169 } 00:26:25.169 Got JSON-RPC error response 00:26:25.169 response: 00:26:25.169 { 00:26:25.169 "code": -13, 00:26:25.169 "message": "Permission denied" 00:26:25.169 } 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:25.169 11:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:26.113 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:26.114 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.114 rmmod nvme_tcp 00:26:26.389 rmmod nvme_fabrics 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 85337 ']' 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 85337 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 85337 ']' 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 85337 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85337 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:26.389 killing process with pid 85337 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85337' 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 85337 00:26:26.389 11:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 85337 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:27.326 11:29:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:27.326 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:27.585 11:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:28.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:28.416 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:26:28.416 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:28.416 11:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.cX9 /tmp/spdk.key-null.Fnk /tmp/spdk.key-sha256.oU0 /tmp/spdk.key-sha384.LIn /tmp/spdk.key-sha512.zKb /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:28.416 11:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:28.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:28.983 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:28.983 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:28.983 00:26:28.983 real 0m40.171s 00:26:28.983 user 0m36.045s 00:26:28.983 sys 0m4.042s 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.983 ************************************ 00:26:28.983 END TEST nvmf_auth_host 00:26:28.983 ************************************ 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.983 ************************************ 00:26:28.983 START TEST nvmf_digest 00:26:28.983 ************************************ 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:28.983 * Looking for test storage... 
00:26:28.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:28.983 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:29.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.242 --rc genhtml_branch_coverage=1 00:26:29.242 --rc genhtml_function_coverage=1 00:26:29.242 --rc genhtml_legend=1 00:26:29.242 --rc geninfo_all_blocks=1 00:26:29.242 --rc geninfo_unexecuted_blocks=1 00:26:29.242 00:26:29.242 ' 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:29.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.242 --rc genhtml_branch_coverage=1 00:26:29.242 --rc genhtml_function_coverage=1 00:26:29.242 --rc genhtml_legend=1 00:26:29.242 --rc geninfo_all_blocks=1 00:26:29.242 --rc geninfo_unexecuted_blocks=1 00:26:29.242 00:26:29.242 ' 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:29.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.242 --rc genhtml_branch_coverage=1 00:26:29.242 --rc genhtml_function_coverage=1 00:26:29.242 --rc genhtml_legend=1 00:26:29.242 --rc geninfo_all_blocks=1 00:26:29.242 --rc geninfo_unexecuted_blocks=1 00:26:29.242 00:26:29.242 ' 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:29.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.242 --rc genhtml_branch_coverage=1 00:26:29.242 --rc genhtml_function_coverage=1 00:26:29.242 --rc genhtml_legend=1 00:26:29.242 --rc geninfo_all_blocks=1 00:26:29.242 --rc geninfo_unexecuted_blocks=1 00:26:29.242 00:26:29.242 ' 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:29.242 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.243 11:29:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.243 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:29.243 Cannot find device "nvmf_init_br" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:29.243 Cannot find device "nvmf_init_br2" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:29.243 Cannot find device "nvmf_tgt_br" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:26:29.243 Cannot find device "nvmf_tgt_br2" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:29.243 Cannot find device "nvmf_init_br" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:29.243 Cannot find device "nvmf_init_br2" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:29.243 Cannot find device "nvmf_tgt_br" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:29.243 Cannot find device "nvmf_tgt_br2" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:29.243 Cannot find device "nvmf_br" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:29.243 Cannot find device "nvmf_init_if" 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:26:29.243 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:29.243 Cannot find device "nvmf_init_if2" 00:26:29.243 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:26:29.243 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:29.243 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:29.243 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:26:29.243 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:29.243 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:29.243 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:26:29.244 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:29.244 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:29.244 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:29.502 11:29:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:29.502 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:29.502 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:26:29.502 00:26:29.502 --- 10.0.0.3 ping statistics --- 00:26:29.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.502 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:29.502 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:29.502 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:26:29.502 00:26:29.502 --- 10.0.0.4 ping statistics --- 00:26:29.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.502 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:29.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:26:29.502 00:26:29.502 --- 10.0.0.1 ping statistics --- 00:26:29.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.502 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:29.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:26:29.502 00:26:29.502 --- 10.0.0.2 ping statistics --- 00:26:29.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.502 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.502 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.503 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.503 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.503 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.503 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:29.503 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:29.503 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:29.503 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:29.503 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.503 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:29.761 ************************************ 00:26:29.761 START TEST nvmf_digest_clean 00:26:29.761 ************************************ 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
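The nvmf_veth_init sequence traced above boils down to a small fixed topology: one veth pair per initiator/target interface, the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace, and the bridge-facing peers enslaved to nvmf_br; the earlier "Cannot find device" messages are just the teardown half of the helper running before anything exists. Condensed from the commands visible in the trace (only the first initiator/target pair is shown; names and addresses are the ones printed above):

    # namespace for the target, plus veth pairs for the initiator and target sides
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator keeps 10.0.0.1/24, the target namespace gets 10.0.0.3/24
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring the links up and bridge the *_br peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # accept NVMe/TCP (port 4420) on the initiator interface, allow bridge forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity check mirroring the pings above
    ping -c 1 10.0.0.3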
00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=87006 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 87006 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 87006 ']' 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.761 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.761 [2024-12-10 11:29:36.473181] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:26:29.761 [2024-12-10 11:29:36.473372] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.020 [2024-12-10 11:29:36.665455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.020 [2024-12-10 11:29:36.797573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.020 [2024-12-10 11:29:36.797652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.020 [2024-12-10 11:29:36.797676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.020 [2024-12-10 11:29:36.797705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.020 [2024-12-10 11:29:36.797726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
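nvmfappstart, traced just above, starts the target inside that namespace with --wait-for-rpc and blocks until /var/tmp/spdk.sock answers; the "TCP Transport Init" and "Listening on 10.0.0.3 port 4420" notices that follow come from the rpc_cmd configuration step, whose payload is collapsed in this trace. A rough equivalent, with the RPC sequence given only as a typical illustration (the method names are standard SPDK RPCs, but the exact arguments and the null bdev size are assumptions, not read from the log):

    # start the target in the namespace, paused until RPC-driven initialization
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

    # once /var/tmp/spdk.sock is up, finish init and build the target configuration
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_start_init                                  # triggers the uring sock override notice
    $rpc nvmf_create_transport -t tcp                          # "*** TCP Transport Init ***"
    $rpc bdev_null_create null0 100 4096                       # the "null0" bdev seen above (size illustrative)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420                             # "Listening on 10.0.0.3 port 4420"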
00:26:30.020 [2024-12-10 11:29:36.799271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.956 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:30.956 [2024-12-10 11:29:37.696074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:31.215 null0 00:26:31.215 [2024-12-10 11:29:37.817931] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.215 [2024-12-10 11:29:37.842095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87038 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87038 /var/tmp/bperf.sock 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 87038 ']' 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.215 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.216 [2024-12-10 11:29:37.962988] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:26:31.216 [2024-12-10 11:29:37.963150] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87038 ] 00:26:31.474 [2024-12-10 11:29:38.146681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.474 [2024-12-10 11:29:38.251885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.410 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.410 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:32.410 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:32.410 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:32.410 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:32.669 [2024-12-10 11:29:39.355314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:32.928 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.928 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.187 nvme0n1 00:26:33.187 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:33.187 11:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:33.187 Running I/O for 2 seconds... 
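Each run_bperf pass on the initiator side follows the same recipe; condensed from the commands in the trace for this first randread 4096/128 run (binary paths, the bperf.sock socket, and the cnode1 NQN are all as printed above):

    # bdevperf on its own RPC socket, paused (-z keeps the app alive, --wait-for-rpc defers init)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # finish init, then attach the remote controller with data digest (--ddgst) enabled
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # the controller's namespace shows up as bdev nvme0n1; drive the 2-second workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests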
00:26:35.502 11430.00 IOPS, 44.65 MiB/s [2024-12-10T11:29:42.328Z] 11684.00 IOPS, 45.64 MiB/s 00:26:35.502 Latency(us) 00:26:35.502 [2024-12-10T11:29:42.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.502 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:35.502 nvme0n1 : 2.01 11668.48 45.58 0.00 0.00 10960.42 9949.56 23473.80 00:26:35.502 [2024-12-10T11:29:42.328Z] =================================================================================================================== 00:26:35.502 [2024-12-10T11:29:42.328Z] Total : 11668.48 45.58 0.00 0.00 10960.42 9949.56 23473.80 00:26:35.502 { 00:26:35.502 "results": [ 00:26:35.502 { 00:26:35.502 "job": "nvme0n1", 00:26:35.502 "core_mask": "0x2", 00:26:35.502 "workload": "randread", 00:26:35.502 "status": "finished", 00:26:35.502 "queue_depth": 128, 00:26:35.502 "io_size": 4096, 00:26:35.502 "runtime": 2.01363, 00:26:35.502 "iops": 11668.479313478643, 00:26:35.502 "mibps": 45.57999731827595, 00:26:35.502 "io_failed": 0, 00:26:35.502 "io_timeout": 0, 00:26:35.502 "avg_latency_us": 10960.422843346643, 00:26:35.502 "min_latency_us": 9949.556363636364, 00:26:35.502 "max_latency_us": 23473.803636363635 00:26:35.502 } 00:26:35.502 ], 00:26:35.502 "core_count": 1 00:26:35.502 } 00:26:35.502 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:35.502 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:35.502 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:35.502 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:35.502 | select(.opcode=="crc32c") 00:26:35.502 | "\(.module_name) \(.executed)"' 00:26:35.502 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87038 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 87038 ']' 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 87038 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.502 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87038 00:26:35.761 killing process with pid 87038 00:26:35.761 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.761 00:26:35.761 Latency(us) 00:26:35.761 [2024-12-10T11:29:42.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:35.761 [2024-12-10T11:29:42.587Z] =================================================================================================================== 00:26:35.761 [2024-12-10T11:29:42.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.761 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:35.761 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:35.761 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87038' 00:26:35.761 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 87038 00:26:35.761 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 87038 00:26:36.697 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:36.697 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:36.697 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:36.697 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:36.697 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:36.697 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:36.697 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:36.698 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87111 00:26:36.698 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87111 /var/tmp/bperf.sock 00:26:36.698 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:36.698 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 87111 ']' 00:26:36.698 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:36.698 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.698 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:36.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:36.698 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.698 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:36.698 [2024-12-10 11:29:43.406869] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:26:36.698 [2024-12-10 11:29:43.407263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87111 ] 00:26:36.698 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:36.698 Zero copy mechanism will not be used. 00:26:36.956 [2024-12-10 11:29:43.581714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.956 [2024-12-10 11:29:43.684391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.920 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.920 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:37.920 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:37.920 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:37.920 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:38.179 [2024-12-10 11:29:44.854804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:38.179 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.179 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.745 nvme0n1 00:26:38.745 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:38.745 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:38.745 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:38.745 Zero copy mechanism will not be used. 00:26:38.745 Running I/O for 2 seconds... 
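After each 2-second run the wrapper reads the accel framework's statistics back from the bdevperf instance and checks that the crc32c opcode, i.e. the digest calculation, actually executed and in the expected module; with no DSA requested (scan_dsa=false) the expected module is software. A condensed sketch of the check visible in the trace after the previous run (the jq filter is copied verbatim; folding it into one pipeline is a simplification):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | {
            read -r acc_module acc_executed
            # digests must have been computed at least once, by the expected engine
            (( acc_executed > 0 )) && [[ $acc_module == software ]]
          }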
00:26:40.618 5888.00 IOPS, 736.00 MiB/s [2024-12-10T11:29:47.445Z] 5896.00 IOPS, 737.00 MiB/s 00:26:40.619 Latency(us) 00:26:40.619 [2024-12-10T11:29:47.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.619 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:40.619 nvme0n1 : 2.00 5893.67 736.71 0.00 0.00 2710.52 2353.34 4885.41 00:26:40.619 [2024-12-10T11:29:47.445Z] =================================================================================================================== 00:26:40.619 [2024-12-10T11:29:47.445Z] Total : 5893.67 736.71 0.00 0.00 2710.52 2353.34 4885.41 00:26:40.619 { 00:26:40.619 "results": [ 00:26:40.619 { 00:26:40.619 "job": "nvme0n1", 00:26:40.619 "core_mask": "0x2", 00:26:40.619 "workload": "randread", 00:26:40.619 "status": "finished", 00:26:40.619 "queue_depth": 16, 00:26:40.619 "io_size": 131072, 00:26:40.619 "runtime": 2.003505, 00:26:40.619 "iops": 5893.671340974942, 00:26:40.619 "mibps": 736.7089176218677, 00:26:40.619 "io_failed": 0, 00:26:40.619 "io_timeout": 0, 00:26:40.619 "avg_latency_us": 2710.5247203744766, 00:26:40.619 "min_latency_us": 2353.338181818182, 00:26:40.619 "max_latency_us": 4885.410909090909 00:26:40.619 } 00:26:40.619 ], 00:26:40.619 "core_count": 1 00:26:40.619 } 00:26:40.877 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:40.877 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:40.877 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:40.877 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:40.877 | select(.opcode=="crc32c") 00:26:40.877 | "\(.module_name) \(.executed)"' 00:26:40.877 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87111 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 87111 ']' 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 87111 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87111 00:26:41.135 killing process with pid 87111 00:26:41.135 Received shutdown signal, test time was about 2.000000 seconds 00:26:41.135 00:26:41.135 Latency(us) 00:26:41.135 [2024-12-10T11:29:47.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:41.135 [2024-12-10T11:29:47.961Z] =================================================================================================================== 00:26:41.135 [2024-12-10T11:29:47.961Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87111' 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 87111 00:26:41.135 11:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 87111 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87179 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87179 /var/tmp/bperf.sock 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 87179 ']' 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:42.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.072 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:42.331 [2024-12-10 11:29:48.917561] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:26:42.331 [2024-12-10 11:29:48.917939] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87179 ] 00:26:42.331 [2024-12-10 11:29:49.094817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.590 [2024-12-10 11:29:49.194867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.157 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.157 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:43.157 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:43.157 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:43.157 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:43.724 [2024-12-10 11:29:50.338927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:43.724 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.724 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.982 nvme0n1 00:26:43.982 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:43.982 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.240 Running I/O for 2 seconds... 
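The MiB/s column in these result tables is just IOPS scaled by the I/O size, so the numbers can be cross-checked straight from the log: 11668.48 IOPS at 4096 bytes works out to 45.58 MiB/s and 5893.67 IOPS at 131072 bytes to 736.71 MiB/s, matching the two tables above.

    # sanity-check the MiB/s column from IOPS and I/O size (values taken from the tables above)
    printf '%.2f MiB/s\n' "$(bc -l <<< '11668.48 * 4096 / 1048576')"    # 45.58
    printf '%.2f MiB/s\n' "$(bc -l <<< '5893.67 * 131072 / 1048576')"   # 736.71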
00:26:46.109 12830.00 IOPS, 50.12 MiB/s [2024-12-10T11:29:52.935Z] 12955.50 IOPS, 50.61 MiB/s 00:26:46.109 Latency(us) 00:26:46.109 [2024-12-10T11:29:52.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.109 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:46.109 nvme0n1 : 2.01 12970.11 50.66 0.00 0.00 9858.57 2964.01 18945.86 00:26:46.109 [2024-12-10T11:29:52.935Z] =================================================================================================================== 00:26:46.109 [2024-12-10T11:29:52.935Z] Total : 12970.11 50.66 0.00 0.00 9858.57 2964.01 18945.86 00:26:46.109 { 00:26:46.109 "results": [ 00:26:46.109 { 00:26:46.109 "job": "nvme0n1", 00:26:46.109 "core_mask": "0x2", 00:26:46.109 "workload": "randwrite", 00:26:46.109 "status": "finished", 00:26:46.109 "queue_depth": 128, 00:26:46.109 "io_size": 4096, 00:26:46.109 "runtime": 2.007616, 00:26:46.109 "iops": 12970.109821798591, 00:26:46.109 "mibps": 50.664491491400746, 00:26:46.109 "io_failed": 0, 00:26:46.109 "io_timeout": 0, 00:26:46.109 "avg_latency_us": 9858.573451151942, 00:26:46.109 "min_latency_us": 2964.0145454545454, 00:26:46.109 "max_latency_us": 18945.861818181816 00:26:46.109 } 00:26:46.109 ], 00:26:46.109 "core_count": 1 00:26:46.109 } 00:26:46.367 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:46.367 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:46.367 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:46.367 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:46.367 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:46.367 | select(.opcode=="crc32c") 00:26:46.367 | "\(.module_name) \(.executed)"' 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87179 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 87179 ']' 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 87179 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87179 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87179' 00:26:46.626 killing process with pid 87179 00:26:46.626 Received shutdown signal, test time was about 2.000000 seconds 00:26:46.626 00:26:46.626 Latency(us) 00:26:46.626 [2024-12-10T11:29:53.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.626 [2024-12-10T11:29:53.452Z] =================================================================================================================== 00:26:46.626 [2024-12-10T11:29:53.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 87179 00:26:46.626 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 87179 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87250 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87250 /var/tmp/bperf.sock 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 87250 ']' 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:47.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.563 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.823 [2024-12-10 11:29:54.400295] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:26:47.823 [2024-12-10 11:29:54.401162] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87250 ] 00:26:47.823 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:47.823 Zero copy mechanism will not be used. 00:26:47.823 [2024-12-10 11:29:54.579082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.084 [2024-12-10 11:29:54.681641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.650 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.650 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:48.650 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:48.650 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:48.650 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:49.217 [2024-12-10 11:29:55.865246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:49.217 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.217 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.784 nvme0n1 00:26:49.784 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:49.784 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:49.785 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.785 Zero copy mechanism will not be used. 00:26:49.785 Running I/O for 2 seconds... 
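Every bperf pass above ends with the same teardown: killprocess verifies the pid is still alive and checks what it is before signalling it, which is why each shutdown prints the uname / ps / "killing process with pid ..." sequence followed by bdevperf's final, all-zero latency table. The shape of that helper, condensed from the trace (the real helper also special-cases sudo-wrapped processes, omitted here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                    # the '[' -z <pid> ']' guard in the trace
        kill -0 "$pid" || return 1                   # pid must still exist
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 for the bdevperf instances here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap it and let its shutdown output flush
    }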
00:26:52.097 4746.00 IOPS, 593.25 MiB/s [2024-12-10T11:29:58.923Z] 4770.00 IOPS, 596.25 MiB/s 00:26:52.097 Latency(us) 00:26:52.097 [2024-12-10T11:29:58.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.097 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:52.097 nvme0n1 : 2.00 4769.50 596.19 0.00 0.00 3345.02 2234.18 6374.87 00:26:52.097 [2024-12-10T11:29:58.923Z] =================================================================================================================== 00:26:52.097 [2024-12-10T11:29:58.923Z] Total : 4769.50 596.19 0.00 0.00 3345.02 2234.18 6374.87 00:26:52.097 { 00:26:52.097 "results": [ 00:26:52.097 { 00:26:52.097 "job": "nvme0n1", 00:26:52.097 "core_mask": "0x2", 00:26:52.097 "workload": "randwrite", 00:26:52.097 "status": "finished", 00:26:52.097 "queue_depth": 16, 00:26:52.097 "io_size": 131072, 00:26:52.097 "runtime": 2.004822, 00:26:52.097 "iops": 4769.500733730974, 00:26:52.097 "mibps": 596.1875917163718, 00:26:52.097 "io_failed": 0, 00:26:52.097 "io_timeout": 0, 00:26:52.097 "avg_latency_us": 3345.015522808085, 00:26:52.097 "min_latency_us": 2234.181818181818, 00:26:52.097 "max_latency_us": 6374.865454545455 00:26:52.097 } 00:26:52.097 ], 00:26:52.097 "core_count": 1 00:26:52.097 } 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:52.097 | select(.opcode=="crc32c") 00:26:52.097 | "\(.module_name) \(.executed)"' 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87250 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 87250 ']' 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 87250 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87250 00:26:52.097 killing process with pid 87250 00:26:52.097 Received shutdown signal, test time was about 2.000000 seconds 00:26:52.097 00:26:52.097 Latency(us) 00:26:52.097 [2024-12-10T11:29:58.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:52.097 [2024-12-10T11:29:58.923Z] =================================================================================================================== 00:26:52.097 [2024-12-10T11:29:58.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87250' 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 87250 00:26:52.097 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 87250 00:26:53.470 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 87006 00:26:53.470 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 87006 ']' 00:26:53.470 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 87006 00:26:53.470 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:53.470 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.470 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87006 00:26:53.470 killing process with pid 87006 00:26:53.470 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:53.471 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:53.471 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87006' 00:26:53.471 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 87006 00:26:53.471 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 87006 00:26:54.406 ************************************ 00:26:54.406 END TEST nvmf_digest_clean 00:26:54.406 ************************************ 00:26:54.406 00:26:54.406 real 0m24.589s 00:26:54.406 user 0m47.906s 00:26:54.406 sys 0m4.691s 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 ************************************ 00:26:54.406 START TEST nvmf_digest_error 00:26:54.406 ************************************ 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:54.406 11:30:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=87358 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 87358 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 87358 ']' 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.406 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 [2024-12-10 11:30:01.106085] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:26:54.406 [2024-12-10 11:30:01.106456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.665 [2024-12-10 11:30:01.290566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.665 [2024-12-10 11:30:01.392135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.665 [2024-12-10 11:30:01.392435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.665 [2024-12-10 11:30:01.392552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.665 [2024-12-10 11:30:01.392667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.665 [2024-12-10 11:30:01.392745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:54.665 [2024-12-10 11:30:01.394220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:55.601 [2024-12-10 11:30:02.103484] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.601 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:55.601 [2024-12-10 11:30:02.297287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:55.601 null0 00:26:55.601 [2024-12-10 11:30:02.425310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.860 [2024-12-10 11:30:02.449539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87392 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87392 /var/tmp/bperf.sock 00:26:55.860 11:30:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 87392 ']' 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:55.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.860 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:55.860 [2024-12-10 11:30:02.555364] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:26:55.860 [2024-12-10 11:30:02.555736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87392 ] 00:26:56.118 [2024-12-10 11:30:02.728079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.118 [2024-12-10 11:30:02.832399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.377 [2024-12-10 11:30:03.016612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:56.944 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.944 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:56.944 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:56.944 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:57.202 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:57.202 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.202 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.202 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.202 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.202 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.461 nvme0n1 00:26:57.461 11:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:57.461 11:30:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.461 11:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.461 11:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.461 11:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:57.461 11:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:57.720 Running I/O for 2 seconds... 00:26:57.721 [2024-12-10 11:30:04.330423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.330515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.330542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.721 [2024-12-10 11:30:04.353824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.353894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.353923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.721 [2024-12-10 11:30:04.377097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.377186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.377210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.721 [2024-12-10 11:30:04.399963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.400034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.400077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.721 [2024-12-10 11:30:04.422316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.422597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.422626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.721 [2024-12-10 11:30:04.444957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.445018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:7861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.445045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.721 [2024-12-10 11:30:04.467345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.467439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.467463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.721 [2024-12-10 11:30:04.489710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.489936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.489974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.721 [2024-12-10 11:30:04.512195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.512449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.512478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.721 [2024-12-10 11:30:04.534856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.721 [2024-12-10 11:30:04.534918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.721 [2024-12-10 11:30:04.534976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.557460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.557532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.557556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.579541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.579631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.579656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.601230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 
[2024-12-10 11:30:04.601505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.601534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.623133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.623206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.623231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.644741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.644837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.644860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.666508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.666567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.666598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.689227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.689293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.689317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.712105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.712167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.712193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.734295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.734385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.734411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.756659] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.756721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.756749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.779406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.779472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.779495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.980 [2024-12-10 11:30:04.801567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:57.980 [2024-12-10 11:30:04.801640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.980 [2024-12-10 11:30:04.801666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:04.823514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:04.823584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:04.823607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:04.846098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:04.846176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:04.846203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:04.868421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:04.868521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:04.868546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:04.891336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:04.891416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:04.891444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:04.914021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:04.914242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:04.914270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:04.936259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:04.936371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:04.936410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:04.959469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:04.959571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:04.959597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:04.981785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:04.982019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:04.982055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:05.004012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:05.004128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:05.004153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:05.026670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:05.026756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:05.026789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.241 [2024-12-10 11:30:05.049836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.241 [2024-12-10 11:30:05.050129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.241 [2024-12-10 11:30:05.050159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.072778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.072866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.072894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.095739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.095837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.095867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.118302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.118398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.118426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.141003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.141137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.141166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.165059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.165137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.165169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.187553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.187643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.187667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.210042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.210127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22079 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.210154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.232355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.232449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.232473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.254963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.255197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.255232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.276812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.277040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.277068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 11133.00 IOPS, 43.49 MiB/s [2024-12-10T11:30:05.325Z] [2024-12-10 11:30:05.298523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.298740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.298774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.499 [2024-12-10 11:30:05.320705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.499 [2024-12-10 11:30:05.320918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.499 [2024-12-10 11:30:05.320947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.757 [2024-12-10 11:30:05.342874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.757 [2024-12-10 11:30:05.342928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.757 [2024-12-10 11:30:05.342984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.757 [2024-12-10 11:30:05.365186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:26:58.757 [2024-12-10 11:30:05.365438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.757 [2024-12-10 11:30:05.365467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.757 [2024-12-10 11:30:05.387196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.387266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.387299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.758 [2024-12-10 11:30:05.409322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.409425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.409450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.758 [2024-12-10 11:30:05.430419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.430630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.430664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.758 [2024-12-10 11:30:05.451534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.451608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.451629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.758 [2024-12-10 11:30:05.472475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.472734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.472768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.758 [2024-12-10 11:30:05.494204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.494263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.494284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.758 [2024-12-10 11:30:05.515923] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.516136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.516170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.758 [2024-12-10 11:30:05.537757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.537833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.537855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.758 [2024-12-10 11:30:05.559063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.559130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.559155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.758 [2024-12-10 11:30:05.582284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:58.758 [2024-12-10 11:30:05.582368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.758 [2024-12-10 11:30:05.582394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.605406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.605465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.605491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.627835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.628036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.628064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.650452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.650696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.650731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.673050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.673244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.673272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.695717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.695900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.695936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.717899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.717972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.717997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.750054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.750136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.750158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.772644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.772849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.772885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.795395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.795481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.795503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.817074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.817126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 
11:30:05.817170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.034 [2024-12-10 11:30:05.838957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.034 [2024-12-10 11:30:05.839050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.034 [2024-12-10 11:30:05.839074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:05.861203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:05.861290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:05.861315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:05.883079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:05.883153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:05.883175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:05.904321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:05.904416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:05.904455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:05.926123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:05.926198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:05.926220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:05.948217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:05.948275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:05.948302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:05.971406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:05.971493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:106 nsid:1 lba:9702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:05.971516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:05.994027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:05.994080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:05.994121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:06.017129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:06.017196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:06.017219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:06.040227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:06.040285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:06.040327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:06.063634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:06.063708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:06.063733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:06.086231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:06.086295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:06.086323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.304 [2024-12-10 11:30:06.108452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.304 [2024-12-10 11:30:06.108549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.304 [2024-12-10 11:30:06.108572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.563 [2024-12-10 11:30:06.131190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.563 
[2024-12-10 11:30:06.131315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.563 [2024-12-10 11:30:06.131373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.563 [2024-12-10 11:30:06.153897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.563 [2024-12-10 11:30:06.154006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.563 [2024-12-10 11:30:06.154030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.563 [2024-12-10 11:30:06.176331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.563 [2024-12-10 11:30:06.176462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.563 [2024-12-10 11:30:06.176495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.563 [2024-12-10 11:30:06.198426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.563 [2024-12-10 11:30:06.198811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.563 [2024-12-10 11:30:06.198842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.563 [2024-12-10 11:30:06.220860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.563 [2024-12-10 11:30:06.221070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.563 [2024-12-10 11:30:06.221105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.563 [2024-12-10 11:30:06.243274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.563 [2024-12-10 11:30:06.243562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.563 [2024-12-10 11:30:06.243590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.563 [2024-12-10 11:30:06.266751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.563 [2024-12-10 11:30:06.266819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.563 [2024-12-10 11:30:06.266842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.563 [2024-12-10 11:30:06.290198] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:59.563 [2024-12-10 11:30:06.290260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.563 [2024-12-10 11:30:06.290282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.563 11259.00 IOPS, 43.98 MiB/s 00:26:59.563 Latency(us) 00:26:59.563 [2024-12-10T11:30:06.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.563 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:59.563 nvme0n1 : 2.01 11266.01 44.01 0.00 0.00 11351.77 10068.71 43849.54 00:26:59.563 [2024-12-10T11:30:06.389Z] =================================================================================================================== 00:26:59.563 [2024-12-10T11:30:06.389Z] Total : 11266.01 44.01 0.00 0.00 11351.77 10068.71 43849.54 00:26:59.563 { 00:26:59.563 "results": [ 00:26:59.563 { 00:26:59.563 "job": "nvme0n1", 00:26:59.563 "core_mask": "0x2", 00:26:59.563 "workload": "randread", 00:26:59.563 "status": "finished", 00:26:59.563 "queue_depth": 128, 00:26:59.563 "io_size": 4096, 00:26:59.563 "runtime": 2.010118, 00:26:59.563 "iops": 11266.005279292061, 00:26:59.563 "mibps": 44.007833122234615, 00:26:59.563 "io_failed": 0, 00:26:59.563 "io_timeout": 0, 00:26:59.563 "avg_latency_us": 11351.770676900596, 00:26:59.563 "min_latency_us": 10068.712727272727, 00:26:59.563 "max_latency_us": 43849.54181818182 00:26:59.563 } 00:26:59.563 ], 00:26:59.563 "core_count": 1 00:26:59.563 } 00:26:59.564 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:59.564 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:59.564 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:59.564 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:59.564 | .driver_specific 00:26:59.564 | .nvme_error 00:26:59.564 | .status_code 00:26:59.564 | .command_transient_transport_error' 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 88 > 0 )) 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87392 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 87392 ']' 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 87392 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87392 00:27:00.131 killing process with pid 87392 00:27:00.131 Received shutdown signal, test time was about 2.000000 seconds 00:27:00.131 00:27:00.131 Latency(us) 00:27:00.131 [2024-12-10T11:30:06.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:27:00.131 [2024-12-10T11:30:06.957Z] =================================================================================================================== 00:27:00.131 [2024-12-10T11:30:06.957Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87392' 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 87392 00:27:00.131 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 87392 00:27:01.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87465 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87465 /var/tmp/bperf.sock 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 87465 ']' 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.068 11:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.068 [2024-12-10 11:30:07.884434] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:27:01.068 [2024-12-10 11:30:07.885571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87465 ] 00:27:01.068 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:01.068 Zero copy mechanism will not be used. 
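The pass/fail check traced above reduces to one jq lookup against bdev_get_iostat. A minimal sketch of that check, using only the socket path, bdev name, jq filter and values shown in this run (nothing here is new; rpc_cmd/killprocess are the harness helpers seen in the trace):

errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# --nvme-error-stat (set via bdev_nvme_set_options, as in the next case below)
# exposes per-status-code NVMe error counters in the iostat JSON; the trace
# above read 88 transient transport errors for this case.
(( errcount > 0 ))   # the case expects at least one such error to have occurred
# the harness then stops this bdevperf instance via its killprocess helper (pid 87392 above)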
00:27:01.326 [2024-12-10 11:30:08.069027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.584 [2024-12-10 11:30:08.174671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.584 [2024-12-10 11:30:08.358960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:02.151 11:30:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.151 11:30:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:02.151 11:30:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:02.151 11:30:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:02.410 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:02.410 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.410 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.410 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.410 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.410 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.668 nvme0n1 00:27:02.668 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:02.668 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.668 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.927 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.927 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:02.927 11:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:02.927 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:02.927 Zero copy mechanism will not be used. 00:27:02.927 Running I/O for 2 seconds... 
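Stripped of the xtrace noise, the setup for this randread / 131072 / qd16 case is the short sequence below; every binary path, socket, address and parameter is taken from the trace above, and rpc_cmd is the harness helper that talks to the nvmf target application rather than to bperf.sock:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
# count NVMe error completions and retry failed I/O indefinitely (--bdev-retry-count -1)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable          # target side: start with injection off
# attach the controller with data digest enabled (--ddgst)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32    # target side: corrupt crc32c (-i 32 as traced), so the host logs the data digest errors below
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests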
00:27:02.927 [2024-12-10 11:30:09.621539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.622254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.622425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.628406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.628713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.628765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.634503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.634554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.634614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.640326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.640577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.640612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.646306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.646394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.646419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.652064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.652292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.652321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.657957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.658026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.658054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.663682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.663899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.663935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.669531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.669602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.669629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.675138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.675379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.675410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.681057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.681133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.681156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.686904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.687155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.687191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.692964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.693032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.693075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.698837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.698915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 
11:30:09.698953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.704734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.704816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.704840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.710557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.927 [2024-12-10 11:30:09.710647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.927 [2024-12-10 11:30:09.710685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.927 [2024-12-10 11:30:09.716367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.928 [2024-12-10 11:30:09.716463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.928 [2024-12-10 11:30:09.716491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.928 [2024-12-10 11:30:09.722160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.928 [2024-12-10 11:30:09.722368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.928 [2024-12-10 11:30:09.722405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.928 [2024-12-10 11:30:09.728167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.928 [2024-12-10 11:30:09.728247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.928 [2024-12-10 11:30:09.728271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:02.928 [2024-12-10 11:30:09.733791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.928 [2024-12-10 11:30:09.734011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.928 [2024-12-10 11:30:09.734039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.928 [2024-12-10 11:30:09.739746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.928 [2024-12-10 11:30:09.739806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.928 [2024-12-10 11:30:09.739830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:02.928 [2024-12-10 11:30:09.745532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.928 [2024-12-10 11:30:09.745601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.928 [2024-12-10 11:30:09.745628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:02.928 [2024-12-10 11:30:09.751381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:02.928 [2024-12-10 11:30:09.751481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.928 [2024-12-10 11:30:09.751512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.757190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.757435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.757465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.763056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.763287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.763458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.769266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.769512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.769673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.775528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.775740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.775964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.781802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.782064] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.782279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.788015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.788224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.788403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.794222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.794455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.794608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.800629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.800848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.801008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.806863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.807104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.807270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.813235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.813477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.813640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.819626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.819853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.820005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.825820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.825882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.825907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.831511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.831604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.831628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.837325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.837423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.837463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.842938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.843183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.843219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.849018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.849229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.849413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.855016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.855249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.855413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.861060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.861314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.861489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 
11:30:09.867224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.867442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.867615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.873511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.873722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.873880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.879759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.879972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.880184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.885907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.886154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.886301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.892102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.892323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.892373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.188 [2024-12-10 11:30:09.898007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.188 [2024-12-10 11:30:09.898243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.188 [2024-12-10 11:30:09.898498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.904428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.904676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.904834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.910730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.910809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.910832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.916501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.916578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.916602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.922393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.922495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.922518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.928304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.928547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.928583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.934417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.934499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.934526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.940311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.940568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.940598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.946271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.946339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 
11:30:09.946384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.952057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.952251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.952280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.957941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.958027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.958055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.963819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.964011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.964046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.969756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.969828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.969863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.975473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.975540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.975564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.981134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.981205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.981233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.986962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.987178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.987214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.993077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.993287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.993472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:09.999305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:09.999534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:09.999688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:10.005450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:10.005663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.189 [2024-12-10 11:30:10.005881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.189 [2024-12-10 11:30:10.011578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.189 [2024-12-10 11:30:10.011789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.448 [2024-12-10 11:30:10.011972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.448 [2024-12-10 11:30:10.017761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.448 [2024-12-10 11:30:10.017966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.448 [2024-12-10 11:30:10.018133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.448 [2024-12-10 11:30:10.023937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.448 [2024-12-10 11:30:10.024152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.448 [2024-12-10 11:30:10.024308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.448 [2024-12-10 11:30:10.030061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.448 [2024-12-10 
11:30:10.030278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.448 [2024-12-10 11:30:10.030447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.448 [2024-12-10 11:30:10.036581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.448 [2024-12-10 11:30:10.036673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.448 [2024-12-10 11:30:10.036707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.448 [2024-12-10 11:30:10.042590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.448 [2024-12-10 11:30:10.042814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.043039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.048981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.049175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.049214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.054904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.054960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.054987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.060703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.060897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.060932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.066602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.066667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.066691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.072534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.072591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.072618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.078294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.078425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.078453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.084059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.084254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.084290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.089916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.089976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.089999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.095701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.095901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.095934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.101518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.101566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.101612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.107273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.107497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.107532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.449 
[2024-12-10 11:30:10.113200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.113257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.113280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.119017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.119221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.119249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.124901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.124958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.124981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.130449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.130498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.130528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.136189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.136416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.136586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.142495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.142731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.142893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.148859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.148916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.148939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.154920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.155013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.155036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.160799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.160850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.160876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.166656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.166707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.166734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.172309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.172409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.172434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.178155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.178388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.178418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.183941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.183996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 11:30:10.184019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.189690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.189900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.449 [2024-12-10 
11:30:10.189936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.449 [2024-12-10 11:30:10.195655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.449 [2024-12-10 11:30:10.195864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.196024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.202023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.202226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.202501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.208467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.208700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.208919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.214731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.214937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.215107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.220898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.221105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.221140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.226798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.226848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.226875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.232626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.232854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.233010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.238770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.238978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.239129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.244909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.245155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.245382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.251146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.251364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.251596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.257298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.257540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.257704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.263874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.264082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.264294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.450 [2024-12-10 11:30:10.270133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.450 [2024-12-10 11:30:10.270341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.450 [2024-12-10 11:30:10.270536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.276412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 
11:30:10.276616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.276781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.282595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.282814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.282988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.288799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.289038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.289259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.295232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.295289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.295315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.300823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.300890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.300915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.306678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.306740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.306764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.312465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.312534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.312607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.318332] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.318430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.318459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.323999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.324230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.324259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.330063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.330139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.330163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.335905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.335970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.335993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.341671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.341741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.341771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.347440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.347508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.347532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.353160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.353211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.353248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.358911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.359161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.359189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.364955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.365024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.365047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.370644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.370714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.370737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.376346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.376450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.376475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.381916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.382147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.382175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.387859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.387916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.387938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.393594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.393664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.393687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.399436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.399524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.399547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.405172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.405400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.405429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.411081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.411138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.411161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.417088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.417291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.417320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.423218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.710 [2024-12-10 11:30:10.423283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.710 [2024-12-10 11:30:10.423305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.710 [2024-12-10 11:30:10.429121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.429320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.429369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.435042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.435098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.435120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.440839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.441069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.441097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.446923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.447013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.447037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.453114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.453170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.453192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.459117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.459171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.459191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.464889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.464987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.465008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.470733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.470803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.470825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.476337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.476556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.476585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.482281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.482522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.482682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.488601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.488802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.488962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.494807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.495010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.495221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.501070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.501298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.501467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.507216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.507464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.507616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.513280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.513516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.513674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.519524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.519764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.519977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.525697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.525898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.526067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.711 [2024-12-10 11:30:10.531886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.711 [2024-12-10 11:30:10.532090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.711 [2024-12-10 11:30:10.532251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.537866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.537952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.537990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.543546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.543602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.543625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.549157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.549212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.549234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.554849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.555109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.555137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.560857] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.560925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.560947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.566641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.566715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.566737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.572302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.572403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.572428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.578170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.578386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.578415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.584184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.584237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.584259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.590046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.590243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.590271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.595958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.596029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.596051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.601657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.601712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.601734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.607363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.607446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.607470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.613064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.613290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.613318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.971 5177.00 IOPS, 647.12 MiB/s [2024-12-10T11:30:10.797Z] [2024-12-10 11:30:10.621091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.621326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.971 [2024-12-10 11:30:10.621551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.971 [2024-12-10 11:30:10.627246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.971 [2024-12-10 11:30:10.627478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.627644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.633268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.633500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.633669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.639383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.639596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:03.972 [2024-12-10 11:30:10.639777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.645603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.645839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.646055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.651779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.651957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.651984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.657546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.657615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.657654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.663140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.663375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.663404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.669055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.669127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.669149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.674650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.674873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.674901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.680486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.680554] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.680607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.686233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.686485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.686514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.692075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.692158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.692180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.697768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.697977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.698007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.703521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.703588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.703643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.709274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.709512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.709541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.715011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.715079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.715099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.720737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 
[2024-12-10 11:30:10.720921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.720949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.726601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.726669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.726691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.732142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.732376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.732405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.737999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.738068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.738090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.743733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.743918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.743947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.749673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.749725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.749763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.755189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.755423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.755451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.761078] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.761129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.761166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.766698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.766896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.766925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.772703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.772772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.772793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.778419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.778487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.778508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.972 [2024-12-10 11:30:10.784184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.972 [2024-12-10 11:30:10.784254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.972 [2024-12-10 11:30:10.784275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.973 [2024-12-10 11:30:10.789823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:03.973 [2024-12-10 11:30:10.790055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.973 [2024-12-10 11:30:10.790082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.796078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.796133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.796156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.801974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.802043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.802066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.807745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.807801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.807823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.813561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.813616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.813638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.819311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.819406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.819430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.824959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.825183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.825211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.831029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.831098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.831120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.836965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.837181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.837210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.843288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.843356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.843413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.849104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.849326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.849353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.855053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.855105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.855143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.860850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.861037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.861065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.866722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.866779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.866801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.872332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.872611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.872640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.878107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.878175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.878196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.883825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.884010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.884038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.889773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.889830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.889852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.895314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.895553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.895582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.901300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.901396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.901434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.906950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.907161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.907189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.912815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.234 [2024-12-10 11:30:10.912870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.234 [2024-12-10 11:30:10.912893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.234 [2024-12-10 11:30:10.918516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.918566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.918606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.924268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.924335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.924356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.930045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.930283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.930312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.936000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.936113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.936151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.941717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.941942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.941971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.947726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.947782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.947804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.953447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.953525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.953546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.959156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.959224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.959245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.964903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.965170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.965199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.970665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.970912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.971123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.976553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.976802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.977029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.982527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.982768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.982912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.988393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.988662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.988811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:10.994266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:10.994535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:10.994746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.000176] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.000436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.000583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.005973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.006217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.006384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.012081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.012166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.012188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.017632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.017701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.017723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.023144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.023416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.023448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.028918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.029003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.029041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.034437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.034509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.034531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.039911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.039970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.039994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.045705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.045985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.046015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.235 [2024-12-10 11:30:11.051754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.235 [2024-12-10 11:30:11.051990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.235 [2024-12-10 11:30:11.052199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.057917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.058119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.058275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.064308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.064543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.064728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.070514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.070735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.071005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.076827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.077057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.077218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.082951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.083182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.083450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.089178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.089415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.089611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.095195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.095439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.095467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.100932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.101165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.101386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.107038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.107268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.107485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.113246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.113481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.113691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.119328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.119590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.119758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.125656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.125879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.126041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.495 [2024-12-10 11:30:11.131734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.495 [2024-12-10 11:30:11.131935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.495 [2024-12-10 11:30:11.132191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.137934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.138156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.138395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.143977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.144034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.144072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.149844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.149900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.149923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.155464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.155518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.155540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.161234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.161447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.161475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.167550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.167637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.167659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.173506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.173558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.173613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.179529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.179581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.179603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.185177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.185412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.185440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.190835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.190903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.190924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.196250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.196319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.196342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.201808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.201876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.201898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.207269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.207337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.207359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.212709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.212926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.212953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.218198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.218266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.218288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.223738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.223923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.223951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.229409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.229633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.229763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.235222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.235439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.235468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.241083] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.241152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.241173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.246533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.246599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.246621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.252169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.252238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.252260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.257819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.258043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.258070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.263460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.263526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.263548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.268971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.269194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.269223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.274641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.274709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.274746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.280225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.280295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.280317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.285750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.285819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.496 [2024-12-10 11:30:11.285840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.496 [2024-12-10 11:30:11.291036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.496 [2024-12-10 11:30:11.291104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.497 [2024-12-10 11:30:11.291125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.497 [2024-12-10 11:30:11.296679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.497 [2024-12-10 11:30:11.296746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.497 [2024-12-10 11:30:11.296768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.497 [2024-12-10 11:30:11.302028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.497 [2024-12-10 11:30:11.302096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.497 [2024-12-10 11:30:11.302117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.497 [2024-12-10 11:30:11.307340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.497 [2024-12-10 11:30:11.307437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.497 [2024-12-10 11:30:11.307460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.497 [2024-12-10 11:30:11.312755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.497 [2024-12-10 11:30:11.312842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.497 [2024-12-10 11:30:11.312865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.497 [2024-12-10 11:30:11.318509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.497 [2024-12-10 11:30:11.318563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.497 [2024-12-10 11:30:11.318585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.324115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.324171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.324194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.329576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.329642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.329663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.335003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.335071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.335093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.340461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.340527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.340548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.345931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.345999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.346021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.351628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.351844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.351873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.357424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.357492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.357513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.362848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.363070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.363099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.368755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.368825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.368848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.374435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.374490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.374512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.379777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.379834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.379856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.385263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.385330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.385350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.390648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.390715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.390738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.395912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.395982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.396004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.401276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.401342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.401380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.406634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.406702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.406723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.412496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.412551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.412574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.418180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.418388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.418417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.423935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.423990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.424013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.429771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.429967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.429995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.435481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.435535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.756 [2024-12-10 11:30:11.435558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.756 [2024-12-10 11:30:11.441269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.756 [2024-12-10 11:30:11.441477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.441506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.447294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.447389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.447413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.452945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.453172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.453200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.458743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.458799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.458821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.464204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.464405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.464434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.469889] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.469942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.469980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.475439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.475495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.475517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.480826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.480894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.480915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.486348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.486441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.486462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.491889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.491944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.491967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.497462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.497528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.497549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.502849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.502915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.502936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.508359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.508456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.508479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.513922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.513974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.514012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.519393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.519460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.519482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.524806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.525023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.525051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.530527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.530596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.530617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.536080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.536318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.536358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.541748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.541816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.541838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.547263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.547330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.547351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.552642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.552710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.552732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.558063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.558131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.558153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.563590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.563656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.563703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.568955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.569021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.569058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.574478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.574542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.574580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.757 [2024-12-10 11:30:11.580027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.757 [2024-12-10 11:30:11.580111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.757 [2024-12-10 11:30:11.580132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.016 [2024-12-10 11:30:11.585633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.016 [2024-12-10 11:30:11.585687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.016 [2024-12-10 11:30:11.585710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.016 [2024-12-10 11:30:11.591179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.016 [2024-12-10 11:30:11.591246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.016 [2024-12-10 11:30:11.591283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.016 [2024-12-10 11:30:11.596644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.016 [2024-12-10 11:30:11.596712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.016 [2024-12-10 11:30:11.596750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.016 [2024-12-10 11:30:11.601941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.016 [2024-12-10 11:30:11.602022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.016 [2024-12-10 11:30:11.602060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.016 [2024-12-10 11:30:11.607349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.016 [2024-12-10 11:30:11.607426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.016 [2024-12-10 11:30:11.607464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.016 [2024-12-10 11:30:11.612989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.016 [2024-12-10 11:30:11.613072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.016 [2024-12-10 11:30:11.613094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.016 5285.50 IOPS, 660.69 MiB/s [2024-12-10T11:30:11.842Z] [2024-12-10 11:30:11.620527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:27:05.016 [2024-12-10 11:30:11.620609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.016 [2024-12-10 11:30:11.620631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.016 00:27:05.016 Latency(us) 00:27:05.016 [2024-12-10T11:30:11.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.016 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:05.016 nvme0n1 : 2.00 5284.01 660.50 0.00 0.00 3022.98 2487.39 8638.84 00:27:05.016 [2024-12-10T11:30:11.842Z] =================================================================================================================== 00:27:05.016 [2024-12-10T11:30:11.842Z] Total : 5284.01 660.50 0.00 0.00 3022.98 2487.39 8638.84 00:27:05.016 { 00:27:05.016 "results": [ 00:27:05.016 { 00:27:05.016 "job": "nvme0n1", 00:27:05.016 "core_mask": "0x2", 00:27:05.016 "workload": "randread", 00:27:05.016 "status": "finished", 00:27:05.016 "queue_depth": 16, 00:27:05.016 "io_size": 131072, 00:27:05.016 "runtime": 2.003593, 00:27:05.016 "iops": 5284.007280919827, 00:27:05.016 "mibps": 660.5009101149784, 00:27:05.016 "io_failed": 0, 00:27:05.016 "io_timeout": 0, 00:27:05.016 "avg_latency_us": 3022.9779245558443, 00:27:05.016 "min_latency_us": 2487.389090909091, 00:27:05.016 "max_latency_us": 8638.836363636363 00:27:05.016 } 00:27:05.016 ], 00:27:05.016 "core_count": 1 00:27:05.016 } 00:27:05.016 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:05.016 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:05.016 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:05.016 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:05.016 | .driver_specific 00:27:05.016 | .nvme_error 00:27:05.017 | .status_code 00:27:05.017 | .command_transient_transport_error' 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 342 > 0 )) 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87465 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 87465 ']' 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 87465 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87465 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:05.275 killing process with pid 87465 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 87465' 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 87465 00:27:05.275 Received shutdown signal, test time was about 2.000000 seconds 00:27:05.275 00:27:05.275 Latency(us) 00:27:05.275 [2024-12-10T11:30:12.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.275 [2024-12-10T11:30:12.101Z] =================================================================================================================== 00:27:05.275 [2024-12-10T11:30:12.101Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.275 11:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 87465 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87531 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87531 /var/tmp/bperf.sock 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 87531 ']' 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.211 11:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.470 [2024-12-10 11:30:13.073492] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
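For reference, the get_transient_errcount check in the read pass above is just a jq query over bdevperf's iostat RPC: with --nvme-error-stat enabled, bdev_get_iostat reports a per-status-code NVMe error counter, and the harness reads the command_transient_transport_error field out of it. A minimal standalone sketch of that query, using the socket path and bdev name from this run (the errcount variable name is only illustrative):

# sketch: pull the transient-transport-error count for nvme0n1 over the bperf RPC socket
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# the digest-error test only passes if at least one injected crc32c mismatch surfaced as a transient transport error
(( errcount > 0 )) && echo "observed $errcount transient transport errors"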
00:27:06.470 [2024-12-10 11:30:13.073630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87531 ] 00:27:06.470 [2024-12-10 11:30:13.248421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.728 [2024-12-10 11:30:13.347873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.728 [2024-12-10 11:30:13.516136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:07.295 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.295 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:07.295 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:07.295 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:07.554 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:07.554 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.554 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.554 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.554 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:07.554 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.121 nvme0n1 00:27:08.121 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:08.121 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.121 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.121 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.121 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:08.121 11:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:08.121 Running I/O for 2 seconds... 
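Condensed for reference, the randwrite pass set up above follows the same recipe as the read pass: NVMe error statistics and unlimited bdev retries are enabled on the bdevperf side, the controller is attached with data digest (--ddgst) enabled over TCP, and crc32c corruption is injected on every 256th operation before the 2-second workload runs. A hedged sketch of that sequence, with addresses and socket paths as traced above (in the harness the injection calls go through rpc_cmd, i.e. the target application's default RPC socket, while the others use the bperf socket):

# sketch of the digest-error setup traced above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock
# count NVMe errors per status code and retry indefinitely inside the bdev layer
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# start from a clean accel state, then corrupt every 256th crc32c computation
"$rpc" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
# drive the configured randwrite workload; digest mismatches surface as transient transport errors
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests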
00:27:08.121 [2024-12-10 11:30:14.903734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:27:08.121 [2024-12-10 11:30:14.905664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.121 [2024-12-10 11:30:14.905723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.121 [2024-12-10 11:30:14.924834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:27:08.121 [2024-12-10 11:30:14.926653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.121 [2024-12-10 11:30:14.926716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.121 [2024-12-10 11:30:14.944510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:27:08.379 [2024-12-10 11:30:14.946432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:14.946535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:08.379 [2024-12-10 11:30:14.964168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:27:08.379 [2024-12-10 11:30:14.965995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:14.966054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:08.379 [2024-12-10 11:30:14.983628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:27:08.379 [2024-12-10 11:30:14.985440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:14.985500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:08.379 [2024-12-10 11:30:15.002916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:27:08.379 [2024-12-10 11:30:15.004665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:15.004726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:08.379 [2024-12-10 11:30:15.022877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:27:08.379 [2024-12-10 11:30:15.024711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:15.024774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:08.379 [2024-12-10 11:30:15.042305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:27:08.379 [2024-12-10 11:30:15.043995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:15.044057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:08.379 [2024-12-10 11:30:15.068834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fef90 00:27:08.379 [2024-12-10 11:30:15.071820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:15.071873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:08.379 [2024-12-10 11:30:15.087278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:27:08.379 [2024-12-10 11:30:15.090286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:15.090345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:08.379 [2024-12-10 11:30:15.106179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:27:08.379 [2024-12-10 11:30:15.109296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:15.109338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:08.379 [2024-12-10 11:30:15.125032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:27:08.379 [2024-12-10 11:30:15.128116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.379 [2024-12-10 11:30:15.128174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:08.380 [2024-12-10 11:30:15.145653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:27:08.380 [2024-12-10 11:30:15.148981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.380 [2024-12-10 11:30:15.149043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:08.380 [2024-12-10 11:30:15.166165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:27:08.380 [2024-12-10 11:30:15.169336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.380 [2024-12-10 11:30:15.169388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:08.380 [2024-12-10 11:30:15.186001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:27:08.380 [2024-12-10 11:30:15.189132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.380 [2024-12-10 11:30:15.189174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.205517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:27:08.638 [2024-12-10 11:30:15.208586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.208645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.225595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:27:08.638 [2024-12-10 11:30:15.228505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.228567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.245347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa7d8 00:27:08.638 [2024-12-10 11:30:15.248572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.248633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.266694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9f68 00:27:08.638 [2024-12-10 11:30:15.269799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.269860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.287552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f96f8 00:27:08.638 [2024-12-10 11:30:15.290505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.290568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.306964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:27:08.638 [2024-12-10 11:30:15.309849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21257 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:08.638 [2024-12-10 11:30:15.309909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.326240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:27:08.638 [2024-12-10 11:30:15.329127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.329187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.346006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:27:08.638 [2024-12-10 11:30:15.349076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.349138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.367159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 00:27:08.638 [2024-12-10 11:30:15.370179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.370227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.388489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6cc8 00:27:08.638 [2024-12-10 11:30:15.391347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.391423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.409498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:27:08.638 [2024-12-10 11:30:15.412341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.412429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.429795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5be8 00:27:08.638 [2024-12-10 11:30:15.432574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.638 [2024-12-10 11:30:15.432636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:08.638 [2024-12-10 11:30:15.449916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5378 00:27:08.639 [2024-12-10 11:30:15.452690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:18846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.639 [2024-12-10 11:30:15.452751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.470540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4b08 00:27:08.897 [2024-12-10 11:30:15.473279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.473327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.490735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:27:08.897 [2024-12-10 11:30:15.493413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.493476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.510912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3a28 00:27:08.897 [2024-12-10 11:30:15.513613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.513677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.531518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f31b8 00:27:08.897 [2024-12-10 11:30:15.534261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.534311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.551947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:27:08.897 [2024-12-10 11:30:15.554650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.554712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.572194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:27:08.897 [2024-12-10 11:30:15.574837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.574899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.592120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1868 00:27:08.897 [2024-12-10 11:30:15.594738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.594816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.612119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:27:08.897 [2024-12-10 11:30:15.614684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.614735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.632127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:27:08.897 [2024-12-10 11:30:15.634712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.634788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.652702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:27:08.897 [2024-12-10 11:30:15.655183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.655245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.673217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:27:08.897 [2024-12-10 11:30:15.675703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.675750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.693646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eee38 00:27:08.897 [2024-12-10 11:30:15.696144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.696190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:08.897 [2024-12-10 11:30:15.713895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:27:08.897 [2024-12-10 11:30:15.716420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.897 [2024-12-10 11:30:15.716480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.734138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 
00:27:09.156 [2024-12-10 11:30:15.736651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.736712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.753812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:27:09.156 [2024-12-10 11:30:15.756224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.756283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.773642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:27:09.156 [2024-12-10 11:30:15.775991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.776053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.793328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec408 00:27:09.156 [2024-12-10 11:30:15.795652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.795736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.812952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:27:09.156 [2024-12-10 11:30:15.815282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.815365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.832767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:27:09.156 [2024-12-10 11:30:15.835017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.835078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.853074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaab8 00:27:09.156 [2024-12-10 11:30:15.855366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.855436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.872747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000173ea248 00:27:09.156 [2024-12-10 11:30:15.874946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.875024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.156 12525.00 IOPS, 48.93 MiB/s [2024-12-10T11:30:15.982Z] [2024-12-10 11:30:15.894542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e99d8 00:27:09.156 [2024-12-10 11:30:15.896792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.896853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.914857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:27:09.156 [2024-12-10 11:30:15.917110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.917157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.935602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:27:09.156 [2024-12-10 11:30:15.937797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.937856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.956771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:27:09.156 [2024-12-10 11:30:15.959114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.959160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:09.156 [2024-12-10 11:30:15.977476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7818 00:27:09.156 [2024-12-10 11:30:15.979658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.156 [2024-12-10 11:30:15.979729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:15.998025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 00:27:09.417 [2024-12-10 11:30:16.000217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.000262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 
sqhd:0015 p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.018795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:27:09.417 [2024-12-10 11:30:16.020876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.020923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.040260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:27:09.417 [2024-12-10 11:30:16.042442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.042522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.062036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:27:09.417 [2024-12-10 11:30:16.064222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.064292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.083056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4de8 00:27:09.417 [2024-12-10 11:30:16.085212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.085284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.103663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:27:09.417 [2024-12-10 11:30:16.105760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.105861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.123979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:27:09.417 [2024-12-10 11:30:16.126042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.126111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.143841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3498 00:27:09.417 [2024-12-10 11:30:16.145806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.145865] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.165254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:27:09.417 [2024-12-10 11:30:16.167291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.167358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.186711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:27:09.417 [2024-12-10 11:30:16.188622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.188670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.206994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:27:09.417 [2024-12-10 11:30:16.208912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.208991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.417 [2024-12-10 11:30:16.228150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:27:09.417 [2024-12-10 11:30:16.230157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.417 [2024-12-10 11:30:16.230228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:09.719 [2024-12-10 11:30:16.249050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0a68 00:27:09.719 [2024-12-10 11:30:16.250827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.719 [2024-12-10 11:30:16.250876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:09.719 [2024-12-10 11:30:16.269581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:27:09.719 [2024-12-10 11:30:16.271394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.719 [2024-12-10 11:30:16.271466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:09.719 [2024-12-10 11:30:16.290551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:27:09.719 [2024-12-10 11:30:16.292388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.719 [2024-12-10 
11:30:16.292454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:09.719 [2024-12-10 11:30:16.310508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df118 00:27:09.719 [2024-12-10 11:30:16.312355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.719 [2024-12-10 11:30:16.312423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:09.719 [2024-12-10 11:30:16.330156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:27:09.719 [2024-12-10 11:30:16.331889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.719 [2024-12-10 11:30:16.331935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:09.719 [2024-12-10 11:30:16.349949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de038 00:27:09.719 [2024-12-10 11:30:16.351730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.719 [2024-12-10 11:30:16.351796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:09.719 [2024-12-10 11:30:16.377614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de038 00:27:09.719 [2024-12-10 11:30:16.380908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.719 [2024-12-10 11:30:16.380982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:09.719 [2024-12-10 11:30:16.396911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:27:09.719 [2024-12-10 11:30:16.399973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.720 [2024-12-10 11:30:16.400039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:09.720 [2024-12-10 11:30:16.417172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df118 00:27:09.720 [2024-12-10 11:30:16.420378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.720 [2024-12-10 11:30:16.420451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:09.720 [2024-12-10 11:30:16.437220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:27:09.720 [2024-12-10 11:30:16.440527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20581 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:09.720 [2024-12-10 11:30:16.440608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:09.720 [2024-12-10 11:30:16.457084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:27:09.720 [2024-12-10 11:30:16.460084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.720 [2024-12-10 11:30:16.460167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:09.720 [2024-12-10 11:30:16.476627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0a68 00:27:09.720 [2024-12-10 11:30:16.479664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.720 [2024-12-10 11:30:16.479747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:09.720 [2024-12-10 11:30:16.497588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:27:09.720 [2024-12-10 11:30:16.500696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.720 [2024-12-10 11:30:16.500771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:09.720 [2024-12-10 11:30:16.518533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:27:09.720 [2024-12-10 11:30:16.521597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.720 [2024-12-10 11:30:16.521677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:09.720 [2024-12-10 11:30:16.537822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:27:09.720 [2024-12-10 11:30:16.540672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.720 [2024-12-10 11:30:16.540736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.556957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:27:09.978 [2024-12-10 11:30:16.559985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.560036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.577488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3498 00:27:09.978 [2024-12-10 11:30:16.580508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.580561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.598022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:27:09.978 [2024-12-10 11:30:16.601008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.601059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.618325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:27:09.978 [2024-12-10 11:30:16.621329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.621419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.638443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4de8 00:27:09.978 [2024-12-10 11:30:16.641224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.641289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.657997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:27:09.978 [2024-12-10 11:30:16.660872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.660936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.678692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:27:09.978 [2024-12-10 11:30:16.681667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.681730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.699788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:27:09.978 [2024-12-10 11:30:16.702632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.702690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.720799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 
00:27:09.978 [2024-12-10 11:30:16.723668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.723760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.742021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7818 00:27:09.978 [2024-12-10 11:30:16.744909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.744972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.762466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:27:09.978 [2024-12-10 11:30:16.765356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.765423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:09.978 [2024-12-10 11:30:16.782767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:27:09.978 [2024-12-10 11:30:16.785528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.978 [2024-12-10 11:30:16.785578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:10.237 [2024-12-10 11:30:16.803718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:27:10.237 [2024-12-10 11:30:16.806471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.237 [2024-12-10 11:30:16.806522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:10.237 [2024-12-10 11:30:16.823947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e99d8 00:27:10.237 [2024-12-10 11:30:16.826582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.237 [2024-12-10 11:30:16.826634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:10.237 [2024-12-10 11:30:16.844425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea248 00:27:10.237 [2024-12-10 11:30:16.846983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.237 [2024-12-10 11:30:16.847036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:10.237 [2024-12-10 11:30:16.864715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000173eaab8 00:27:10.237 [2024-12-10 11:30:16.867299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.237 [2024-12-10 11:30:16.867362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:10.237 12461.50 IOPS, 48.68 MiB/s [2024-12-10T11:30:17.063Z] [2024-12-10 11:30:16.887283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:27:10.237 [2024-12-10 11:30:16.889923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.237 [2024-12-10 11:30:16.889993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:10.237 00:27:10.237 Latency(us) 00:27:10.237 [2024-12-10T11:30:17.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.237 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:10.237 nvme0n1 : 2.01 12441.52 48.60 0.00 0.00 10277.38 8936.73 35508.60 00:27:10.237 [2024-12-10T11:30:17.063Z] =================================================================================================================== 00:27:10.237 [2024-12-10T11:30:17.063Z] Total : 12441.52 48.60 0.00 0.00 10277.38 8936.73 35508.60 00:27:10.237 { 00:27:10.237 "results": [ 00:27:10.237 { 00:27:10.237 "job": "nvme0n1", 00:27:10.237 "core_mask": "0x2", 00:27:10.237 "workload": "randwrite", 00:27:10.237 "status": "finished", 00:27:10.237 "queue_depth": 128, 00:27:10.237 "io_size": 4096, 00:27:10.237 "runtime": 2.0135, 00:27:10.237 "iops": 12441.519741743234, 00:27:10.237 "mibps": 48.59968649118451, 00:27:10.237 "io_failed": 0, 00:27:10.237 "io_timeout": 0, 00:27:10.237 "avg_latency_us": 10277.375492758409, 00:27:10.237 "min_latency_us": 8936.727272727272, 00:27:10.237 "max_latency_us": 35508.59636363637 00:27:10.237 } 00:27:10.237 ], 00:27:10.237 "core_count": 1 00:27:10.237 } 00:27:10.237 11:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:10.237 11:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:10.237 11:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:10.237 | .driver_specific 00:27:10.237 | .nvme_error 00:27:10.237 | .status_code 00:27:10.237 | .command_transient_transport_error' 00:27:10.237 11:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 98 > 0 )) 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87531 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 87531 ']' 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 87531 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:10.496 11:30:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87531 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:10.496 killing process with pid 87531 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87531' 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 87531 00:27:10.496 Received shutdown signal, test time was about 2.000000 seconds 00:27:10.496 00:27:10.496 Latency(us) 00:27:10.496 [2024-12-10T11:30:17.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.496 [2024-12-10T11:30:17.322Z] =================================================================================================================== 00:27:10.496 [2024-12-10T11:30:17.322Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:10.496 11:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 87531 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87600 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87600 /var/tmp/bperf.sock 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 87600 ']' 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:11.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:11.431 11:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:11.690 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:11.690 Zero copy mechanism will not be used. 
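The check traced just above is the core assertion of this digest_error pass: host/digest.sh reads the bdev's NVMe error counters back over the bperf RPC socket and requires at least one COMMAND TRANSIENT TRANSPORT ERROR (98 were counted here) before it kills the finished bdevperf instance and starts the next one. A minimal bash sketch of that sequence, limited to the rpc.py call, jq filter, and pid values visible in the trace:

    # Sketch of the get_transient_errcount check as traced above (paths and pid are this run's values).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # the pass fails if no digest error was surfaced as a transient transport error
    kill 87531           # tear down the completed bdevperf instance before the next pass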
00:27:11.690 [2024-12-10 11:30:18.334186] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:27:11.690 [2024-12-10 11:30:18.334340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87600 ] 00:27:11.690 [2024-12-10 11:30:18.512491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.949 [2024-12-10 11:30:18.610497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.207 [2024-12-10 11:30:18.775872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:12.465 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.465 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:12.465 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:12.465 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:12.723 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:12.723 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.723 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:12.723 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.723 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.723 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:13.289 nvme0n1 00:27:13.289 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:13.289 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.289 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:13.289 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.289 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:13.289 11:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:13.289 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:13.289 Zero copy mechanism will not be used. 00:27:13.289 Running I/O for 2 seconds... 
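The EAL banner above belongs to the freshly started bdevperf (pid 87600) for the 128 KiB, queue-depth-16 pass, and the trace that follows configures it before perform_tests launches the 2-second run. A condensed sketch of that RPC sequence, kept to the commands shown in the trace (bperf_rpc targets /var/tmp/bperf.sock; the socket used by rpc_cmd for the accel error injection is not shown in this excerpt, so the default rpc.py socket is assumed below):

    # Sketch of the digest_error setup traced above (address, port, and NQN are this run's values).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable            # start clean: no crc32c injection yet
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0            # data digest enabled on the TCP connection
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32      # inject crc32c corruption (interval 32, as traced)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests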
00:27:13.289 [2024-12-10 11:30:20.024545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.024663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.024707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.031507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.031615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.031658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.038507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.038628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.038663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.045637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.045747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.045781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.052779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.052870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.052914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.059869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.059991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.060026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.066776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.066887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.066922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.074089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.074188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.074234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.081274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.081484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.081517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.088785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.088882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.088954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.095658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.095783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.095826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.102504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.102655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.102688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.289 [2024-12-10 11:30:20.109353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.289 [2024-12-10 11:30:20.109513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.289 [2024-12-10 11:30:20.109554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.547 [2024-12-10 11:30:20.116213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.547 [2024-12-10 11:30:20.116307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.547 [2024-12-10 11:30:20.116364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.547 [2024-12-10 11:30:20.122993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.547 [2024-12-10 11:30:20.123107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.547 [2024-12-10 11:30:20.123140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.547 [2024-12-10 11:30:20.129879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.547 [2024-12-10 11:30:20.130002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.547 [2024-12-10 11:30:20.130044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.547 [2024-12-10 11:30:20.136863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.547 [2024-12-10 11:30:20.136981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.547 [2024-12-10 11:30:20.137024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.547 [2024-12-10 11:30:20.143630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.547 [2024-12-10 11:30:20.143763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.547 [2024-12-10 11:30:20.143797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.547 [2024-12-10 11:30:20.150475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.547 [2024-12-10 11:30:20.150604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.547 [2024-12-10 11:30:20.150645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.547 [2024-12-10 11:30:20.157413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.547 [2024-12-10 11:30:20.157537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.547 [2024-12-10 11:30:20.157611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.547 [2024-12-10 11:30:20.164213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.547 [2024-12-10 11:30:20.164367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:13.547 [2024-12-10 11:30:20.164415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.547 [2024-12-10 11:30:20.170746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.170891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.170932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.177753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.177859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.177901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.184546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.184686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.184720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.191495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.191606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.191650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.198403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.198556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.198589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.205266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.205407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.205442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.212229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.212326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.212412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.219183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.219309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.219343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.226095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.226219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.226252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.233003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.233134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.233177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.239970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.240081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.240114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.246793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.246921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.246955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.253421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.253527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.253568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.260089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.260193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.260227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.266869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.266976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.267009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.273615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.273731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.273774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.280392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.280535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.280568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.287160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.287284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.287317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.293857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.293952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.293995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.300565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.300679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.300713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.307311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.307464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.307497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.314027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.314147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.314193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.320876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.321005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.321042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.327998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.328167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.328209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.334924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.335070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.335149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.341872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.342061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.342103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.348717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.348842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.348889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.355530] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.355685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.355763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.362510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.362681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.362723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.548 [2024-12-10 11:30:20.369404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.548 [2024-12-10 11:30:20.369506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.548 [2024-12-10 11:30:20.369550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.376245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.376387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.376431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.383068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.383191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.383226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.389892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.390018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.390062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.396893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.397008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.397055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.404001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.404134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.404168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.410836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.410961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.411016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.417700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.417811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.417852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.424274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.424381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.424413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.430641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.430788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.430828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.437032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.437155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.437194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.443463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.443589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.443620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.450337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.450513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.450555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.456901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.457000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.457040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.463772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.463903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.463937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.807 [2024-12-10 11:30:20.470483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.807 [2024-12-10 11:30:20.470610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.807 [2024-12-10 11:30:20.470650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.476985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.477104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.477142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.483465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.483597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.483628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.489832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.489930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 
11:30:20.489969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.496384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.496525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.496564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.502829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.502963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.503025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.509491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.509618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.509656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.516292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.516438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.516480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.523209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.523346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.523377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.529827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.529963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.530000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.536339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.536478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.536518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.542831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.542999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.543033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.549347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.549470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.549501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.555771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.555893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.555945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.562309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.562469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.562501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.568799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.568911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.568942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.575154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.575265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.575304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.581662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.581776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.581825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.588122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.588234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.588264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.594445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.594559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.594599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.600836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.600953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.600984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.607510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.607620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.607652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.613959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.614082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.614121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.620710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.620874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.620906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.808 [2024-12-10 11:30:20.627703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:27:13.808 [2024-12-10 11:30:20.627843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.808 [2024-12-10 11:30:20.627876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.068 [2024-12-10 11:30:20.634445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.068 [2024-12-10 11:30:20.634561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.068 [2024-12-10 11:30:20.634599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.068 [2024-12-10 11:30:20.640993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.068 [2024-12-10 11:30:20.641108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.068 [2024-12-10 11:30:20.641139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.068 [2024-12-10 11:30:20.647595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.068 [2024-12-10 11:30:20.647738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.068 [2024-12-10 11:30:20.647772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.068 [2024-12-10 11:30:20.654159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.068 [2024-12-10 11:30:20.654289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.068 [2024-12-10 11:30:20.654335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.068 [2024-12-10 11:30:20.661027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.661143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.661176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.667990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.668167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.668199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.674873] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.675006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.675052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.681658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.681792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.681823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.688545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.688664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.688699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.695324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.695491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.695543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.702304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.702462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.702503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.709256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.709386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.709439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.716049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.716157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.716201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.722914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.723072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.723105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.729819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.729959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.729995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.736460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.736583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.736623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.743116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.743252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.743285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.749774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.749918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.749951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.756493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.756657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.756711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.763217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.763422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.763491] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.769990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.770167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.770210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.776640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.776751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.776793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.783326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.783475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.783509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.789861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.789981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.790013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.796464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.796594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.796637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.803012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.803161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.803195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.809817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.809956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 
11:30:20.809990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.816684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.816813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.816854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.823169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.823286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.823319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.829912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.830030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.830063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.836545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.069 [2024-12-10 11:30:20.836661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.069 [2024-12-10 11:30:20.836703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.069 [2024-12-10 11:30:20.843237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.070 [2024-12-10 11:30:20.843362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.070 [2024-12-10 11:30:20.843413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.070 [2024-12-10 11:30:20.849866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.070 [2024-12-10 11:30:20.849981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.070 [2024-12-10 11:30:20.850014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.070 [2024-12-10 11:30:20.856416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.070 [2024-12-10 11:30:20.856525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.070 [2024-12-10 11:30:20.856566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.070 [2024-12-10 11:30:20.862946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.070 [2024-12-10 11:30:20.863082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.070 [2024-12-10 11:30:20.863115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.070 [2024-12-10 11:30:20.869722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.070 [2024-12-10 11:30:20.869861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.070 [2024-12-10 11:30:20.869893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.070 [2024-12-10 11:30:20.876359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.070 [2024-12-10 11:30:20.876518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.070 [2024-12-10 11:30:20.876570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.070 [2024-12-10 11:30:20.883124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.070 [2024-12-10 11:30:20.883307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.070 [2024-12-10 11:30:20.883349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.070 [2024-12-10 11:30:20.889973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.070 [2024-12-10 11:30:20.890124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.070 [2024-12-10 11:30:20.890163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.896757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.896870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.896912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.903557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.903662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.903707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.910445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.910602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.910641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.917159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.917277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.917321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.923967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.924079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.924114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.930739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.930860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.930899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.937442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.937585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.937628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.944114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.944256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.944289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.950884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.951028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.951068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.957551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.957684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.957727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.964327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.964472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.964507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.970912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.971053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.971092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.977658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.977761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.329 [2024-12-10 11:30:20.977809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.329 [2024-12-10 11:30:20.984430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.329 [2024-12-10 11:30:20.984565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:20.984602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:20.991224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:20.991335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:20.991376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:20.997966] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:20.998096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:20.998140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.004627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.004778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.004811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.011245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.011379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.011444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.018019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.018129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.018170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.330 4571.00 IOPS, 571.38 MiB/s [2024-12-10T11:30:21.156Z] [2024-12-10 11:30:21.027060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.027234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.027277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.034119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.034253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.034304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.041100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.041229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.041281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.048004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.048163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.048203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.054899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.055039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.055083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.061819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.061926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.061967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.068745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.068913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.068946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.075600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.075717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.075760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.082302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.082431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.082473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.088973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.089095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.089128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.095699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.095835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.095879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.102320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.102463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.102505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.109053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.109188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.109221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.115756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.115879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.115921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.122374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.122493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.122535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.129151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.129276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.129310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.135859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.135975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:14.330 [2024-12-10 11:30:21.136009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.142504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.142628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.142661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.330 [2024-12-10 11:30:21.149140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.330 [2024-12-10 11:30:21.149243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.330 [2024-12-10 11:30:21.149277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.589 [2024-12-10 11:30:21.156005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.156129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.156162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.162740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.162845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.162878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.169416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.169531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.169563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.176045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.176177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.176210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.182650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.182761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.182794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.189323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.189450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.189483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.195899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.195999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.196050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.202544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.202641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.202674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.209306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.209422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.209472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.216104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.216247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.216285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.223071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.223215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.223259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.230127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.230270] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.230313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.237046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.237198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.237242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.243882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.243975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.244009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.250517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.250631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.250664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.257254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.257410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.257444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.264007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.264198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.264231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.270845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.270945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.270978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.277663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.277783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.277817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.284403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.284495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.284529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.291020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.291133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.291167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.297927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.298064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.298097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.304812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.304910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.304943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.311773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.311885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.311919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.318430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.318564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.318595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.325048] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.325149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.325180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.331619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.331778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.590 [2024-12-10 11:30:21.331811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.590 [2024-12-10 11:30:21.338162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.590 [2024-12-10 11:30:21.338262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.338294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.344809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.344936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.344968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.351303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.351420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.351452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.357828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.357943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.357978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.364479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.364597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.364630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.591 
[2024-12-10 11:30:21.370877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.371003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.371035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.377601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.377718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.377752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.384609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.384720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.384752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.391440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.391560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.391594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.398375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.398554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.398587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.405278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.405384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.405432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.591 [2024-12-10 11:30:21.411984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.591 [2024-12-10 11:30:21.412086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.591 [2024-12-10 11:30:21.412121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.418808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.418919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.418954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.425669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.425792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.425826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.432485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.432618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.432652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.439455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.439570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.439603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.446265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.446396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.446430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.453049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.453151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.453185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.459813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.459907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.459940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.466416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.466524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.466557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.473182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.473298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.473331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.480010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.480148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.480183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.486706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.486816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.486850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.493459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.493576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.493610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.500304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.500461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.850 [2024-12-10 11:30:21.500495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.850 [2024-12-10 11:30:21.507021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.850 [2024-12-10 11:30:21.507152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:14.851 [2024-12-10 11:30:21.507186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.513665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.513784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.513819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.520561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.520676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.520708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.527133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.527259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.527292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.533897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.534034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.534067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.540650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.540777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.540810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.547241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.547376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.547426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.554042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.554173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.554207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.560758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.560887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.560921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.567492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.567609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.567642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.574202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.574335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.574368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.580914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.581022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.581055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.587701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.587841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.587873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.594406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.594517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.594551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.601115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.601231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.601263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.607768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.607868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.607901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.614501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.614615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.614649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.621326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.621451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.621485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.628215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.628343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.628410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.635026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.635129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.635161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.641732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.641861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.641893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.648487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.648612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.648645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.655187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.655288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.655320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.661991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.662084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.662117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.851 [2024-12-10 11:30:21.668806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:14.851 [2024-12-10 11:30:21.668915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.851 [2024-12-10 11:30:21.668948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.110 [2024-12-10 11:30:21.675756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.110 [2024-12-10 11:30:21.675866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.110 [2024-12-10 11:30:21.675899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.110 [2024-12-10 11:30:21.682596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.110 [2024-12-10 11:30:21.682737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.110 [2024-12-10 11:30:21.682770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.110 [2024-12-10 11:30:21.689200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.689303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.689336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.695928] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.696048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.696081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.702607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.702710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.702741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.709160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.709268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.709299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.715829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.715942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.715975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.722580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.722705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.722754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.729561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.729697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.729731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.736172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.736300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.736333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.742957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.743099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.743132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.750025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.750138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.750170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.756719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.756834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.756868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.763634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.763770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.763804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.770417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.770526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.770559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.777113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.777206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.777239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.783895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.784012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.784067] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.790694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.790831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.790865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.797548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.797661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.797694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.804211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.804342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.804375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.810908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.811024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.811071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.817672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.817816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.817849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.824344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.824488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.824525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.831011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.831122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 
11:30:21.831155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.837740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.837861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.837893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.844496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.111 [2024-12-10 11:30:21.844608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.111 [2024-12-10 11:30:21.844642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.111 [2024-12-10 11:30:21.851095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.851218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.851251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.857934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.858046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.858078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.864592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.864720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.864752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.871153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.871261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.871293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.877919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.878021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.878053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.884642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.884759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.884791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.891162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.891264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.891296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.897909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.898039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.898071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.904640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.904755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.904787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.911158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.911268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.911299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.917941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.918071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.918104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.924662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.924787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.924820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.112 [2024-12-10 11:30:21.931289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.112 [2024-12-10 11:30:21.931410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.112 [2024-12-10 11:30:21.931444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.937997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.938108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.938141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.944676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.944781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.944814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.951292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.951421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.951454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.957980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.958108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.958141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.964673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.964785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.964818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.971261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.971376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.971424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.977895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.978023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.978054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.984670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.984791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.984826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.991391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.991511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.991544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:21.998113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:21.998206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:21.998239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:22.004898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:22.005009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:22.005042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.370 [2024-12-10 11:30:22.011621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.370 [2024-12-10 11:30:22.011751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.370 [2024-12-10 11:30:22.011784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.371 [2024-12-10 11:30:22.018358] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:27:15.371 [2024-12-10 11:30:22.018488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.371 [2024-12-10 11:30:22.018521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.371 4579.50 IOPS, 572.44 MiB/s 00:27:15.371 Latency(us) 00:27:15.371 [2024-12-10T11:30:22.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.371 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:15.371 nvme0n1 : 2.01 4576.46 572.06 0.00 0.00 3487.17 2874.65 8817.57 00:27:15.371 [2024-12-10T11:30:22.197Z] =================================================================================================================== 00:27:15.371 [2024-12-10T11:30:22.197Z] Total : 4576.46 572.06 0.00 0.00 3487.17 2874.65 8817.57 00:27:15.371 { 00:27:15.371 "results": [ 00:27:15.371 { 00:27:15.371 "job": "nvme0n1", 00:27:15.371 "core_mask": "0x2", 00:27:15.371 "workload": "randwrite", 00:27:15.371 "status": "finished", 00:27:15.371 "queue_depth": 16, 00:27:15.371 "io_size": 131072, 00:27:15.371 "runtime": 2.005044, 00:27:15.371 "iops": 4576.458172488983, 00:27:15.371 "mibps": 572.0572715611229, 00:27:15.371 "io_failed": 0, 00:27:15.371 "io_timeout": 0, 00:27:15.371 "avg_latency_us": 3487.1684108742174, 00:27:15.371 "min_latency_us": 2874.6472727272726, 00:27:15.371 "max_latency_us": 8817.57090909091 00:27:15.371 } 00:27:15.371 ], 00:27:15.371 "core_count": 1 00:27:15.371 } 00:27:15.371 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:15.371 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:15.371 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:15.371 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:15.371 | .driver_specific 00:27:15.371 | .nvme_error 00:27:15.371 | .status_code 00:27:15.371 | .command_transient_transport_error' 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 296 > 0 )) 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87600 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 87600 ']' 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 87600 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87600 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:15.629 killing process with pid 87600 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87600' 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 87600 00:27:15.629 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.629 00:27:15.629 Latency(us) 00:27:15.629 [2024-12-10T11:30:22.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.629 [2024-12-10T11:30:22.455Z] =================================================================================================================== 00:27:15.629 [2024-12-10T11:30:22.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.629 11:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 87600 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 87358 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 87358 ']' 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 87358 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87358 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:17.004 killing process with pid 87358 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87358' 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 87358 00:27:17.004 11:30:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 87358 00:27:17.949 ************************************ 00:27:17.949 END TEST nvmf_digest_error 00:27:17.949 ************************************ 00:27:17.949 00:27:17.949 real 0m23.485s 00:27:17.949 user 0m45.629s 00:27:17.949 sys 0m4.728s 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:17.949 rmmod nvme_tcp 00:27:17.949 rmmod nvme_fabrics 00:27:17.949 rmmod nvme_keyring 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 87358 ']' 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 87358 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 87358 ']' 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 87358 00:27:17.949 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87358) - No such process 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 87358 is not found' 00:27:17.949 Process with pid 87358 is not found 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:17.949 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:18.218 
11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:27:18.218 00:27:18.218 real 0m49.187s 00:27:18.218 user 1m33.824s 00:27:18.218 sys 0m9.870s 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:18.218 ************************************ 00:27:18.218 END TEST nvmf_digest 00:27:18.218 ************************************ 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.218 ************************************ 00:27:18.218 START TEST nvmf_host_multipath 00:27:18.218 ************************************ 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:18.218 * Looking for test storage... 
00:27:18.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:27:18.218 11:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:18.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.477 --rc genhtml_branch_coverage=1 00:27:18.477 --rc genhtml_function_coverage=1 00:27:18.477 --rc genhtml_legend=1 00:27:18.477 --rc geninfo_all_blocks=1 00:27:18.477 --rc geninfo_unexecuted_blocks=1 00:27:18.477 00:27:18.477 ' 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:18.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.477 --rc genhtml_branch_coverage=1 00:27:18.477 --rc genhtml_function_coverage=1 00:27:18.477 --rc genhtml_legend=1 00:27:18.477 --rc geninfo_all_blocks=1 00:27:18.477 --rc geninfo_unexecuted_blocks=1 00:27:18.477 00:27:18.477 ' 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:18.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.477 --rc genhtml_branch_coverage=1 00:27:18.477 --rc genhtml_function_coverage=1 00:27:18.477 --rc genhtml_legend=1 00:27:18.477 --rc geninfo_all_blocks=1 00:27:18.477 --rc geninfo_unexecuted_blocks=1 00:27:18.477 00:27:18.477 ' 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:18.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.477 --rc genhtml_branch_coverage=1 00:27:18.477 --rc genhtml_function_coverage=1 00:27:18.477 --rc genhtml_legend=1 00:27:18.477 --rc geninfo_all_blocks=1 00:27:18.477 --rc geninfo_unexecuted_blocks=1 00:27:18.477 00:27:18.477 ' 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.477 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:18.478 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:18.478 Cannot find device "nvmf_init_br" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:18.478 Cannot find device "nvmf_init_br2" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:18.478 Cannot find device "nvmf_tgt_br" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:18.478 Cannot find device "nvmf_tgt_br2" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:18.478 Cannot find device "nvmf_init_br" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:18.478 Cannot find device "nvmf_init_br2" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:18.478 Cannot find device "nvmf_tgt_br" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:18.478 Cannot find device "nvmf_tgt_br2" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:18.478 Cannot find device "nvmf_br" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:18.478 Cannot find device "nvmf_init_if" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:18.478 Cannot find device "nvmf_init_if2" 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:27:18.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:18.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:18.478 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:18.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:18.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:27:18.736 00:27:18.736 --- 10.0.0.3 ping statistics --- 00:27:18.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.736 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:18.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:18.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:27:18.736 00:27:18.736 --- 10.0.0.4 ping statistics --- 00:27:18.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.736 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:18.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:27:18.736 00:27:18.736 --- 10.0.0.1 ping statistics --- 00:27:18.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.736 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:18.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:18.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:27:18.736 00:27:18.736 --- 10.0.0.2 ping statistics --- 00:27:18.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.736 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:18.736 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=87934 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 87934 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 87934 ']' 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.737 11:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:18.994 [2024-12-10 11:30:25.608590] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
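(Editor's note: the nvmf_veth_init sequence traced above can be condensed into the short sketch below. It only restates the ip/iptables commands already visible in this log — the nvmf_tgt_ns_spdk namespace, the nvmf_init_*/nvmf_tgt_* interface names, and the 10.0.0.0/24 addresses are taken from the trace — so treat it as a summary of what the script did, not as additional test output.)

# target side lives in its own network namespace, initiator side stays in the root namespace
ip netns add nvmf_tgt_ns_spdk
# two veth pairs for the initiator, two for the target; the *_br ends get bridged later
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator addresses in the root namespace, target addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bring the interfaces up and join the four peer ends into one bridge
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up && ip link set "$dev" master nvmf_br
done
# accept NVMe/TCP traffic on port 4420, let the bridge forward, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this topology in place, the target (nvmf_tgt) is started inside nvmf_tgt_ns_spdk and listens on 10.0.0.3, while the host-side initiator reaches it across the bridge — which is exactly what the nvmfappstart/waitforlisten steps that follow in the trace rely on.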
00:27:18.994 [2024-12-10 11:30:25.608769] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.994 [2024-12-10 11:30:25.794016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:19.251 [2024-12-10 11:30:25.921178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.251 [2024-12-10 11:30:25.921260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.251 [2024-12-10 11:30:25.921284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.251 [2024-12-10 11:30:25.921312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.252 [2024-12-10 11:30:25.921329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:19.252 [2024-12-10 11:30:25.923439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.252 [2024-12-10 11:30:25.923440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.509 [2024-12-10 11:30:26.120011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:20.073 11:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.074 11:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:27:20.074 11:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:20.074 11:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:20.074 11:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:20.074 11:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.074 11:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=87934 00:27:20.074 11:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:20.331 [2024-12-10 11:30:26.904278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.331 11:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:20.588 Malloc0 00:27:20.588 11:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:20.845 11:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:21.103 11:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:21.361 [2024-12-10 11:30:28.110562] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:21.361 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:21.619 [2024-12-10 11:30:28.370713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=87992 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 87992 /var/tmp/bdevperf.sock 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 87992 ']' 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.619 11:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:22.992 11:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.992 11:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:27:22.992 11:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:22.992 11:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:23.250 Nvme0n1 00:27:23.507 11:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:23.768 Nvme0n1 00:27:23.768 11:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:23.768 11:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:24.726 11:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:24.726 11:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:25.292 11:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:25.551 11:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:25.551 11:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88037 00:27:25.551 11:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:25.551 11:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:32.111 Attaching 4 probes... 00:27:32.111 @path[10.0.0.3, 4421]: 13608 00:27:32.111 @path[10.0.0.3, 4421]: 14246 00:27:32.111 @path[10.0.0.3, 4421]: 14289 00:27:32.111 @path[10.0.0.3, 4421]: 14212 00:27:32.111 @path[10.0.0.3, 4421]: 13728 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88037 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:32.111 11:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:27:32.370 11:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:32.370 11:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88151 00:27:32.370 11:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:32.370 11:30:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:38.927 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:38.927 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:38.927 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:38.927 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:38.927 Attaching 4 probes... 00:27:38.927 @path[10.0.0.3, 4420]: 13570 00:27:38.927 @path[10.0.0.3, 4420]: 14303 00:27:38.928 @path[10.0.0.3, 4420]: 14579 00:27:38.928 @path[10.0.0.3, 4420]: 14220 00:27:38.928 @path[10.0.0.3, 4420]: 14099 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88151 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:38.928 11:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:39.494 11:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:39.494 11:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88268 00:27:39.494 11:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:39.494 11:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:46.053 Attaching 4 probes... 00:27:46.053 @path[10.0.0.3, 4421]: 12159 00:27:46.053 @path[10.0.0.3, 4421]: 13774 00:27:46.053 @path[10.0.0.3, 4421]: 13742 00:27:46.053 @path[10.0.0.3, 4421]: 13703 00:27:46.053 @path[10.0.0.3, 4421]: 13791 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88268 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:46.053 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:27:46.311 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:46.311 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:46.311 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88381 00:27:46.311 11:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:52.867 11:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:52.867 11:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:52.867 Attaching 4 probes... 
00:27:52.867 00:27:52.867 00:27:52.867 00:27:52.867 00:27:52.867 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88381 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:52.867 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:53.433 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:53.433 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88500 00:27:53.433 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:53.433 11:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:59.986 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:59.986 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:59.986 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:59.986 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:59.987 Attaching 4 probes... 
00:27:59.987 @path[10.0.0.3, 4421]: 13488 00:27:59.987 @path[10.0.0.3, 4421]: 13829 00:27:59.987 @path[10.0.0.3, 4421]: 13824 00:27:59.987 @path[10.0.0.3, 4421]: 13880 00:27:59.987 @path[10.0.0.3, 4421]: 13826 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88500 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:59.987 [2024-12-10 11:31:06.645874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 00:27:59.987 [2024-12-10 11:31:06.646105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 00:27:59.987 [2024-12-10 11:31:06.646134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 00:27:59.987 11:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:28:00.920 11:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:00.920 11:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88617 00:28:00.920 11:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:00.920 11:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:07.477 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:07.477 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:07.477 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.478 Attaching 4 probes... 
00:28:07.478 @path[10.0.0.3, 4420]: 13482 00:28:07.478 @path[10.0.0.3, 4420]: 13697 00:28:07.478 @path[10.0.0.3, 4420]: 13813 00:28:07.478 @path[10.0.0.3, 4420]: 13785 00:28:07.478 @path[10.0.0.3, 4420]: 13831 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88617 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.478 11:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:07.478 [2024-12-10 11:31:14.241893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:07.478 11:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:08.044 11:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:14.606 11:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:14.606 11:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88788 00:28:14.606 11:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:14.606 11:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:19.871 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:19.871 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:20.129 Attaching 4 probes... 
00:28:20.129 @path[10.0.0.3, 4421]: 11214 00:28:20.129 @path[10.0.0.3, 4421]: 12919 00:28:20.129 @path[10.0.0.3, 4421]: 13481 00:28:20.129 @path[10.0.0.3, 4421]: 13626 00:28:20.129 @path[10.0.0.3, 4421]: 13555 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88788 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 87992 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 87992 ']' 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 87992 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87992 00:28:20.129 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:20.129 killing process with pid 87992 00:28:20.130 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:20.130 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87992' 00:28:20.130 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 87992 00:28:20.130 11:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 87992 00:28:20.130 { 00:28:20.130 "results": [ 00:28:20.130 { 00:28:20.130 "job": "Nvme0n1", 00:28:20.130 "core_mask": "0x4", 00:28:20.130 "workload": "verify", 00:28:20.130 "status": "terminated", 00:28:20.130 "verify_range": { 00:28:20.130 "start": 0, 00:28:20.130 "length": 16384 00:28:20.130 }, 00:28:20.130 "queue_depth": 128, 00:28:20.130 "io_size": 4096, 00:28:20.130 "runtime": 56.354592, 00:28:20.130 "iops": 5919.748296642801, 00:28:20.130 "mibps": 23.12401678376094, 00:28:20.130 "io_failed": 0, 00:28:20.130 "io_timeout": 0, 00:28:20.130 "avg_latency_us": 21593.20244624631, 00:28:20.130 "min_latency_us": 1608.610909090909, 00:28:20.130 "max_latency_us": 7046430.72 00:28:20.130 } 00:28:20.130 ], 00:28:20.130 "core_count": 1 00:28:20.130 } 00:28:21.514 11:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 87992 00:28:21.514 11:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:21.514 [2024-12-10 11:30:28.512539] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 
24.03.0 initialization... 00:28:21.514 [2024-12-10 11:30:28.512721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87992 ] 00:28:21.514 [2024-12-10 11:30:28.697830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.514 [2024-12-10 11:30:28.821899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.514 [2024-12-10 11:30:29.030147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:21.514 Running I/O for 90 seconds... 00:28:21.514 7137.00 IOPS, 27.88 MiB/s [2024-12-10T11:31:28.340Z] 7268.50 IOPS, 28.39 MiB/s [2024-12-10T11:31:28.340Z] 7197.67 IOPS, 28.12 MiB/s [2024-12-10T11:31:28.340Z] 7164.25 IOPS, 27.99 MiB/s [2024-12-10T11:31:28.340Z] 7163.40 IOPS, 27.98 MiB/s [2024-12-10T11:31:28.340Z] 7161.50 IOPS, 27.97 MiB/s [2024-12-10T11:31:28.340Z] 7115.57 IOPS, 27.80 MiB/s [2024-12-10T11:31:28.340Z] 7090.12 IOPS, 27.70 MiB/s [2024-12-10T11:31:28.340Z] [2024-12-10 11:30:39.035782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.514 [2024-12-10 11:30:39.035888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.035991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.514 [2024-12-10 11:30:39.036024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.514 [2024-12-10 11:30:39.036085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.514 [2024-12-10 11:30:39.036141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.514 [2024-12-10 11:30:39.036197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.514 [2024-12-10 11:30:39.036252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.514 [2024-12-10 11:30:39.036306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.514 [2024-12-10 11:30:39.036380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.514 [2024-12-10 11:30:39.036441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.514 [2024-12-10 11:30:39.036519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.514 [2024-12-10 11:30:39.036579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.514 [2024-12-10 11:30:39.036635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.514 [2024-12-10 11:30:39.036690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.514 [2024-12-10 11:30:39.036745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.514 [2024-12-10 11:30:39.036800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:21.514 [2024-12-10 11:30:39.036831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.036855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.036887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.036910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.036942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.036966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.036997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.037021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.037091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.037147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.037211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.037270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.037325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.037967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.037993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.038024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.038048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.038080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.038103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.038134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.038165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.038196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.038220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.038251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.515 [2024-12-10 11:30:39.038275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.038306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.515 [2024-12-10 11:30:39.038330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:21.515 [2024-12-10 11:30:39.038377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.038458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.038514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.038569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.038625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:28:21.516 [2024-12-10 11:30:39.038691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.038748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.038804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.038859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.038914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.038970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.038993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.039048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.039104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.039159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.039213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.516 [2024-12-10 11:30:39.039268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.516 [2024-12-10 11:30:39.039323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.516 [2024-12-10 11:30:39.039408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.516 [2024-12-10 11:30:39.039467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.516 [2024-12-10 11:30:39.039522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.516 [2024-12-10 11:30:39.039576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.516 [2024-12-10 11:30:39.039631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.516 [2024-12-10 11:30:39.039700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.039760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.039814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.039868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.516 [2024-12-10 11:30:39.039900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.516 [2024-12-10 11:30:39.039925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.039955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.039979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:21.517 [2024-12-10 11:30:39.040412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.517 [2024-12-10 11:30:39.040637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.040702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.040758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.040813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.040882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.040937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.040968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 
nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.040992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.041024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.041048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.041079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.041103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.041134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.041158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.041188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.041212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.041242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.041266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.041297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.041321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:21.517 [2024-12-10 11:30:39.041366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.517 [2024-12-10 11:30:39.041393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041547] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.041955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.041986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.042009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.042041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.518 [2024-12-10 11:30:39.042064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 00:28:21.518 [2024-12-10 11:30:39.042095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.518 [2024-12-10 11:30:39.042119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.042169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.518 [2024-12-10 11:30:39.042194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.042225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.518 [2024-12-10 11:30:39.042258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.042299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.518 [2024-12-10 11:30:39.042326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.042372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.518 [2024-12-10 11:30:39.042399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.042433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.518 [2024-12-10 11:30:39.042457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.044312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.518 [2024-12-10 11:30:39.044374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.044435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.044468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.044503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.044528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.044560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.044584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.044615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.044639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.044670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.044694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.044725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.044749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.044781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.044805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:21.518 [2024-12-10 11:30:39.044859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.518 [2024-12-10 11:30:39.044906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:39.044942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:39.044967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:39.044999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:39.045023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:39.045055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:39.045078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:39.045109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:39.045133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:39.045164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:39.045188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:39.045220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:39.045243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:39.045275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:39.045298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:39.045336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:39.045378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:21.519 7070.44 IOPS, 27.62 MiB/s [2024-12-10T11:31:28.345Z] 7047.40 IOPS, 27.53 MiB/s [2024-12-10T11:31:28.345Z] 7056.91 IOPS, 27.57 MiB/s [2024-12-10T11:31:28.345Z] 7076.83 IOPS, 27.64 MiB/s [2024-12-10T11:31:28.345Z] 7078.92 IOPS, 27.65 MiB/s [2024-12-10T11:31:28.345Z] 7077.86 IOPS, 27.65 MiB/s [2024-12-10T11:31:28.345Z] 7079.07 IOPS, 27.65 MiB/s [2024-12-10T11:31:28.345Z] [2024-12-10 11:30:45.724044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:45.724130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:45.724252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:45.724313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:45.724418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:45.724474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:21.519 [2024-12-10 11:30:45.724528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:45.724582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.519 [2024-12-10 11:30:45.724637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.519 [2024-12-10 11:30:45.724691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.519 [2024-12-10 11:30:45.724746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.519 [2024-12-10 11:30:45.724800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.519 [2024-12-10 11:30:45.724854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.519 [2024-12-10 11:30:45.724909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.519 [2024-12-10 11:30:45.724964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.724996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.519 [2024-12-10 11:30:45.725019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.725051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:123 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.519 [2024-12-10 11:30:45.725085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.725119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.519 [2024-12-10 11:30:45.725144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:21.519 [2024-12-10 11:30:45.725175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.725199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.725254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.725309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.725382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.725441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.725496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.725564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.520 [2024-12-10 11:30:45.725635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.520 [2024-12-10 11:30:45.725692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.520 [2024-12-10 11:30:45.725748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.520 [2024-12-10 11:30:45.725813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.520 [2024-12-10 11:30:45.725872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.520 [2024-12-10 11:30:45.725928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.725959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.520 [2024-12-10 11:30:45.725990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.520 [2024-12-10 11:30:45.726050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:28:21.520 [2024-12-10 11:30:45.726248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:21.520 [2024-12-10 11:30:45.726872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.520 [2024-12-10 11:30:45.726896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.726927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.726951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.726982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.727005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.727933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.727964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:21.521 [2024-12-10 11:30:45.727996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.521 [2024-12-10 11:30:45.728896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.728961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.728990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.729025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.729049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.729080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.729104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:21.521 [2024-12-10 11:30:45.729135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.521 [2024-12-10 11:30:45.729159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.729214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.729269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.729335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.729416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.729484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.729543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.729597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.729652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.729707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.729767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:28:21.522 [2024-12-10 11:30:45.729806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.729833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.729896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.729951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.729982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.730006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.730037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.730061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.730092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.730116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.730147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.730170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.730211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.730237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.730269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.730292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.731317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.522 [2024-12-10 11:30:45.731375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.731429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.731456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.731496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.731521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.731582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.731607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.731646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.731671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.731741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.731770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.731810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.731838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.731888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.731912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.731977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.522 [2024-12-10 11:30:45.732785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:21.522 [2024-12-10 11:30:45.732822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:21.523 [2024-12-10 11:30:45.732846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:45.732885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:45.732909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:45.732948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:45.732983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:45.733024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:45.733048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:45.733088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:45.733112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:21.523 6698.62 IOPS, 26.17 MiB/s [2024-12-10T11:31:28.349Z] 6656.65 IOPS, 26.00 MiB/s [2024-12-10T11:31:28.349Z] 6670.83 IOPS, 26.06 MiB/s [2024-12-10T11:31:28.349Z] 6681.00 IOPS, 26.10 MiB/s [2024-12-10T11:31:28.349Z] 6688.95 IOPS, 26.13 MiB/s [2024-12-10T11:31:28.349Z] 6699.19 IOPS, 26.17 MiB/s [2024-12-10T11:31:28.349Z] 6710.82 IOPS, 26.21 MiB/s [2024-12-10T11:31:28.349Z] [2024-12-10 11:30:52.939914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:52.940000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:52.940124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:52.940185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:52.940241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:52.940297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:52.940370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:52.940434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:52.940490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.940545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.940625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.940680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.940735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.940790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.940844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.940899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.940954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.940985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:28:21.523 [2024-12-10 11:30:52.941461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.523 [2024-12-10 11:30:52.941872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:21.523 [2024-12-10 11:30:52.941911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.523 [2024-12-10 11:30:52.941937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.941971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.942959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.942983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.524 [2024-12-10 11:30:52.943039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.524 [2024-12-10 11:30:52.943094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.524 [2024-12-10 11:30:52.943151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:21.524 [2024-12-10 11:30:52.943206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.524 [2024-12-10 11:30:52.943262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.524 [2024-12-10 11:30:52.943318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.524 [2024-12-10 11:30:52.943392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.524 [2024-12-10 11:30:52.943449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.943515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.943571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.943627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.943682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.943756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.943813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.943869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.943923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.943956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.943980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.944012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.944036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.944106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.944132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.944163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.944187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.944228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.944254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:21.524 [2024-12-10 11:30:52.944286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.524 [2024-12-10 11:30:52.944311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.944427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.944487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.944542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.944597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.944652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.944707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.944761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.944818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.944872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.944927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.944958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.944995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0 00:28:21.525 [2024-12-10 11:30:52.945030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.945965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.945988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.946019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.946043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.946073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.525 [2024-12-10 11:30:52.946097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.946128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.946152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.946183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.946207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.946238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.946262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.946293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.525 [2024-12-10 11:30:52.946317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:21.525 [2024-12-10 11:30:52.946384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.526 [2024-12-10 11:30:52.946412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.946455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.526 [2024-12-10 11:30:52.946481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.946512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.526 [2024-12-10 11:30:52.946537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.947447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.526 [2024-12-10 11:30:52.947491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.947542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.947569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.947610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.947635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.947676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:21.526 [2024-12-10 11:30:52.947717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.947761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.947786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.947826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.947851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.947891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.947916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.947956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.947981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22184 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:30:52.948795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:30:52.948820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:21.526 6526.39 IOPS, 25.49 MiB/s [2024-12-10T11:31:28.352Z] 6254.46 IOPS, 24.43 MiB/s [2024-12-10T11:31:28.352Z] 6004.28 IOPS, 23.45 MiB/s [2024-12-10T11:31:28.352Z] 5773.35 IOPS, 22.55 MiB/s [2024-12-10T11:31:28.352Z] 5559.52 IOPS, 21.72 MiB/s [2024-12-10T11:31:28.352Z] 5360.96 IOPS, 20.94 MiB/s [2024-12-10T11:31:28.352Z] 5176.10 IOPS, 20.22 MiB/s [2024-12-10T11:31:28.352Z] 5147.77 IOPS, 20.11 MiB/s [2024-12-10T11:31:28.352Z] 5203.65 IOPS, 20.33 MiB/s [2024-12-10T11:31:28.352Z] 5257.03 IOPS, 20.54 MiB/s [2024-12-10T11:31:28.352Z] 5307.18 IOPS, 20.73 MiB/s [2024-12-10T11:31:28.352Z] 5355.79 IOPS, 20.92 MiB/s [2024-12-10T11:31:28.352Z] 5400.26 IOPS, 21.09 MiB/s [2024-12-10T11:31:28.352Z] 5442.47 IOPS, 21.26 MiB/s [2024-12-10T11:31:28.352Z] [2024-12-10 11:31:06.645670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.645756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.645843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.645877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.645941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.645967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 
m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:21.526 [2024-12-10 11:31:06.646630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.526 [2024-12-10 11:31:06.646658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.646690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.646714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.646746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.646769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.646802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.646835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.646873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.646897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.646929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.646952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.646983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 
11:31:06.647641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.647889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.647931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.647972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.647994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.527 [2024-12-10 11:31:06.648641] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.648685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.527 [2024-12-10 11:31:06.648727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.527 [2024-12-10 11:31:06.648748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.648768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.648790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.648811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.648832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.648853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.648874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.648894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.648916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.648936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.648957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.648977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.648999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.649391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.649433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.649475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.649535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.649581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.649623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.649665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.528 [2024-12-10 11:31:06.649706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.649951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.649972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 
11:31:06.649993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.528 [2024-12-10 11:31:06.650537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.528 [2024-12-10 11:31:06.650559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.529 [2024-12-10 11:31:06.650602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.529 [2024-12-10 11:31:06.650644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.529 [2024-12-10 11:31:06.650695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.529 [2024-12-10 11:31:06.650738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.529 [2024-12-10 11:31:06.650779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.650822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 11:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.529 [2024-12-10 11:31:06.650871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.650916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.650957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.650990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651325] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.529 [2024-12-10 11:31:06.651481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bf00 is same with the state(6) to be set 00:28:21.529 [2024-12-10 11:31:06.651530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.529 [2024-12-10 11:31:06.651556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.529 [2024-12-10 11:31:06.651573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129408 len:8 PRP1 0x0 PRP2 0x0 00:28:21.529 [2024-12-10 11:31:06.651604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.529 [2024-12-10 11:31:06.651641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.529 [2024-12-10 11:31:06.651657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129928 len:8 PRP1 0x0 PRP2 0x0 00:28:21.529 [2024-12-10 11:31:06.651676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.529 [2024-12-10 11:31:06.651725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.529 [2024-12-10 11:31:06.651741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129936 len:8 PRP1 0x0 PRP2 0x0 00:28:21.529 [2024-12-10 11:31:06.651760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.529 [2024-12-10 11:31:06.651794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.529 [2024-12-10 11:31:06.651809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129944 len:8 PRP1 0x0 PRP2 0x0 00:28:21.529 [2024-12-10 11:31:06.651828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.529 [2024-12-10 11:31:06.651862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.529 [2024-12-10 11:31:06.651877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129952 len:8 PRP1 0x0 PRP2 0x0 00:28:21.529 [2024-12-10 11:31:06.651896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.529 [2024-12-10 11:31:06.651936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.529 [2024-12-10 11:31:06.651956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129960 len:8 PRP1 0x0 PRP2 0x0 00:28:21.529 [2024-12-10 11:31:06.651976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.651995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.529 [2024-12-10 11:31:06.652009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.529 [2024-12-10 11:31:06.652025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129968 len:8 PRP1 0x0 PRP2 0x0 00:28:21.529 [2024-12-10 11:31:06.652044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.529 [2024-12-10 11:31:06.652071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.529 [2024-12-10 11:31:06.652088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.529 [2024-12-10 11:31:06.652103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129976 len:8 PRP1 0x0 PRP2 0x0 00:28:21.529 [2024-12-10 11:31:06.652122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.530 [2024-12-10 11:31:06.652141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.530 [2024-12-10 11:31:06.652156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.530 [2024-12-10 11:31:06.652171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129984 len:8 PRP1 0x0 PRP2 0x0 00:28:21.530 [2024-12-10 11:31:06.652190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.530 [2024-12-10 11:31:06.652209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.530 [2024-12-10 11:31:06.652232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.530 [2024-12-10 11:31:06.652248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129992 len:8 PRP1 0x0 PRP2 0x0 00:28:21.530 [2024-12-10 11:31:06.652266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.530 [2024-12-10 11:31:06.652285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.530 [2024-12-10 11:31:06.652299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.530 [2024-12-10 11:31:06.652315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130000 len:8 PRP1 0x0 PRP2 0x0 00:28:21.530 [2024-12-10 11:31:06.652334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.530 [2024-12-10 11:31:06.652368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.530 [2024-12-10 11:31:06.652387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.530 [2024-12-10 11:31:06.652403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130008 len:8 PRP1 0x0 PRP2 0x0 00:28:21.530 [2024-12-10 11:31:06.652422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.530 [2024-12-10 11:31:06.652441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:21.530 [2024-12-10 11:31:06.652456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:21.530 [2024-12-10 11:31:06.652472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130016 len:8 PRP1 0x0 PRP2 0x0 00:28:21.530 [2024-12-10 11:31:06.652490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.530 [2024-12-10 11:31:06.654183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:21.530 [2024-12-10 11:31:06.654307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.530 [2024-12-10 11:31:06.654341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.530 [2024-12-10 11:31:06.654420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:28:21.530 [2024-12-10 11:31:06.654966] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.530 [2024-12-10 11:31:06.655021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b500 with addr=10.0.0.3, port=4421 00:28:21.530 [2024-12-10 11:31:06.655063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:28:21.530 [2024-12-10 11:31:06.655159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:28:21.530 [2024-12-10 11:31:06.655216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:21.530 [2024-12-10 11:31:06.655244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:21.530 [2024-12-10 11:31:06.655265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:21.530 [2024-12-10 11:31:06.655295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:21.530 [2024-12-10 11:31:06.655317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:21.530 5480.70 IOPS, 21.41 MiB/s [2024-12-10T11:31:28.356Z] 5515.42 IOPS, 21.54 MiB/s [2024-12-10T11:31:28.356Z] 5549.90 IOPS, 21.68 MiB/s [2024-12-10T11:31:28.356Z] 5583.05 IOPS, 21.81 MiB/s [2024-12-10T11:31:28.356Z] 5615.27 IOPS, 21.93 MiB/s [2024-12-10T11:31:28.356Z] 5644.81 IOPS, 22.05 MiB/s [2024-12-10T11:31:28.356Z] 5676.33 IOPS, 22.17 MiB/s [2024-12-10T11:31:28.356Z] 5703.86 IOPS, 22.28 MiB/s [2024-12-10T11:31:28.356Z] 5730.53 IOPS, 22.38 MiB/s [2024-12-10T11:31:28.356Z] 5755.00 IOPS, 22.48 MiB/s [2024-12-10T11:31:28.356Z] [2024-12-10 11:31:16.751727] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:28:21.530 5778.96 IOPS, 22.57 MiB/s [2024-12-10T11:31:28.356Z] 5801.44 IOPS, 22.66 MiB/s [2024-12-10T11:31:28.356Z] 5821.16 IOPS, 22.74 MiB/s [2024-12-10T11:31:28.356Z] 5839.62 IOPS, 22.81 MiB/s [2024-12-10T11:31:28.356Z] 5846.37 IOPS, 22.84 MiB/s [2024-12-10T11:31:28.356Z] 5849.33 IOPS, 22.85 MiB/s [2024-12-10T11:31:28.356Z] 5864.85 IOPS, 22.91 MiB/s [2024-12-10T11:31:28.356Z] 5881.87 IOPS, 22.98 MiB/s [2024-12-10T11:31:28.356Z] 5898.85 IOPS, 23.04 MiB/s [2024-12-10T11:31:28.356Z] 5914.38 IOPS, 23.10 MiB/s [2024-12-10T11:31:28.356Z] Received shutdown signal, test time was about 56.355528 seconds 00:28:21.530 00:28:21.530 Latency(us) 00:28:21.530 [2024-12-10T11:31:28.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.530 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:21.530 Verification LBA range: start 0x0 length 0x4000 00:28:21.530 Nvme0n1 : 56.35 5919.75 23.12 0.00 0.00 21593.20 1608.61 7046430.72 00:28:21.530 [2024-12-10T11:31:28.356Z] =================================================================================================================== 00:28:21.530 [2024-12-10T11:31:28.356Z] Total : 5919.75 23.12 0.00 0.00 21593.20 1608.61 7046430.72 00:28:21.530 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:21.530 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:21.530 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:21.530 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:21.530 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:28:21.530 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:21.530 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:28:21.530 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:21.530 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:21.530 rmmod nvme_tcp 00:28:21.530 rmmod nvme_fabrics 00:28:21.788 rmmod nvme_keyring 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@128 -- # set -e 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 87934 ']' 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 87934 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 87934 ']' 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 87934 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.788 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87934 00:28:21.789 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:21.789 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:21.789 killing process with pid 87934 00:28:21.789 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87934' 00:28:21.789 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 87934 00:28:21.789 11:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 87934 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:23.164 11:31:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:28:23.164 00:28:23.164 real 1m4.903s 00:28:23.164 user 3m0.980s 00:28:23.164 sys 0m17.093s 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:23.164 ************************************ 00:28:23.164 END TEST nvmf_host_multipath 00:28:23.164 ************************************ 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.164 ************************************ 00:28:23.164 START TEST nvmf_timeout 00:28:23.164 ************************************ 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:23.164 * Looking for test storage... 
00:28:23.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:28:23.164 11:31:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:23.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.423 --rc genhtml_branch_coverage=1 00:28:23.423 --rc genhtml_function_coverage=1 00:28:23.423 --rc genhtml_legend=1 00:28:23.423 --rc geninfo_all_blocks=1 00:28:23.423 --rc geninfo_unexecuted_blocks=1 00:28:23.423 00:28:23.423 ' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:23.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.423 --rc genhtml_branch_coverage=1 00:28:23.423 --rc genhtml_function_coverage=1 00:28:23.423 --rc genhtml_legend=1 00:28:23.423 --rc geninfo_all_blocks=1 00:28:23.423 --rc geninfo_unexecuted_blocks=1 00:28:23.423 00:28:23.423 ' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:23.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.423 --rc genhtml_branch_coverage=1 00:28:23.423 --rc genhtml_function_coverage=1 00:28:23.423 --rc genhtml_legend=1 00:28:23.423 --rc geninfo_all_blocks=1 00:28:23.423 --rc geninfo_unexecuted_blocks=1 00:28:23.423 00:28:23.423 ' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:23.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.423 --rc genhtml_branch_coverage=1 00:28:23.423 --rc genhtml_function_coverage=1 00:28:23.423 --rc genhtml_legend=1 00:28:23.423 --rc geninfo_all_blocks=1 00:28:23.423 --rc geninfo_unexecuted_blocks=1 00:28:23.423 00:28:23.423 ' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.423 
11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:23.423 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:23.423 11:31:30 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:23.423 Cannot find device "nvmf_init_br" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:23.423 Cannot find device "nvmf_init_br2" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:28:23.423 Cannot find device "nvmf_tgt_br" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:23.423 Cannot find device "nvmf_tgt_br2" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:23.423 Cannot find device "nvmf_init_br" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:23.423 Cannot find device "nvmf_init_br2" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:23.423 Cannot find device "nvmf_tgt_br" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:23.423 Cannot find device "nvmf_tgt_br2" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:23.423 Cannot find device "nvmf_br" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:23.423 Cannot find device "nvmf_init_if" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:23.423 Cannot find device "nvmf_init_if2" 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:23.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:23.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:23.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:23.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:23.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
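For reference, the scattered ip/iptables calls above are the whole of the veth topology that nvmf_veth_init builds for this run. A condensed sketch, using the same interface names and addresses shown in the log (only the first initiator/target pair is spelled out; the second pair, 10.0.0.2 and 10.0.0.4, is created the same way, and all link-up steps are omitted):
# One bridge in the root namespace; initiator veth ends stay in the root
# namespace, target veth ends are moved into the nvmf_tgt_ns_spdk namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.3/24
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# NVMe/TCP traffic on port 4420 is then allowed through, as in the rules above.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The ping checks that follow verify both directions across the bridge (10.0.0.3/10.0.0.4 from the initiator side, 10.0.0.1/10.0.0.2 from inside the target namespace) before the target process is started.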
00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:23.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:23.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:28:23.682 00:28:23.682 --- 10.0.0.3 ping statistics --- 00:28:23.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.682 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:23.682 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:23.682 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:28:23.682 00:28:23.682 --- 10.0.0.4 ping statistics --- 00:28:23.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.682 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:23.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:23.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:28:23.682 00:28:23.682 --- 10.0.0.1 ping statistics --- 00:28:23.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.682 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:23.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:28:23.682 00:28:23.682 --- 10.0.0.2 ping statistics --- 00:28:23.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.682 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:23.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=89164 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 89164 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:23.940 11:31:30 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 89164 ']' 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.940 11:31:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:23.940 [2024-12-10 11:31:30.676488] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:28:23.940 [2024-12-10 11:31:30.676642] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.198 [2024-12-10 11:31:30.866572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:24.198 [2024-12-10 11:31:31.014181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.198 [2024-12-10 11:31:31.014258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.198 [2024-12-10 11:31:31.014286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.198 [2024-12-10 11:31:31.014317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.198 [2024-12-10 11:31:31.014335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:24.198 [2024-12-10 11:31:31.016456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.198 [2024-12-10 11:31:31.016472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.456 [2024-12-10 11:31:31.231306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:25.021 11:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.021 11:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:28:25.021 11:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:25.021 11:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:25.021 11:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:25.021 11:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.021 11:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:25.021 11:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:25.279 [2024-12-10 11:31:31.933537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.279 11:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:25.537 Malloc0 00:28:25.537 11:31:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.102 11:31:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:26.102 11:31:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:26.361 [2024-12-10 11:31:33.158068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:26.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:26.361 11:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=89219 00:28:26.361 11:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:26.361 11:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 89219 /var/tmp/bdevperf.sock 00:28:26.361 11:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 89219 ']' 00:28:26.361 11:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:26.361 11:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.361 11:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
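Taken together, the timeout.sh@25-29 RPCs above are the entire target-side provisioning for this test. A condensed restatement, with rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the log, issued against the nvmf_tgt started earlier over its default RPC socket:
# TCP transport, one malloc bdev (the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512
# values set above), one subsystem exposing that bdev, and a listener on the
# in-namespace address 10.0.0.3:4420.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420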
00:28:26.361 11:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.361 11:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:26.619 [2024-12-10 11:31:33.290803] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:28:26.619 [2024-12-10 11:31:33.290998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89219 ] 00:28:26.878 [2024-12-10 11:31:33.479220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.878 [2024-12-10 11:31:33.602966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.150 [2024-12-10 11:31:33.785938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:27.732 11:31:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.732 11:31:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:28:27.732 11:31:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:27.992 11:31:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:28.250 NVMe0n1 00:28:28.250 11:31:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:28.250 11:31:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=89243 00:28:28.250 11:31:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:28.508 Running I/O for 10 seconds... 
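On the host side, the test drives bdevperf over its own RPC socket. The timeout.sh@31-53 commands above boil down to the sequence below; bdevperf and bdevperf.py stand in for the full build/examples and examples/bdev/bdevperf paths shown in the log, and the trailing '&' is only shorthand for the waitforlisten-based startup the harness actually uses:
# bdevperf started idle (-z) on /var/tmp/bdevperf.sock: queue depth 128, 4096-byte
# I/O, verify workload, 10 s run time (matching the job description in the results).
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
# Apply the script's bdev_nvme options, then attach the target with a 5 s
# controller-loss timeout and a 2 s reconnect delay -- the knobs this test exercises.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the I/O and give it a second before the fault is injected.
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
The very next RPC in the log (timeout.sh@55) removes the 10.0.0.3:4420 listener from cnode1 while the verify workload is running, which is what provokes the controller-loss/reconnect handling traced below.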
00:28:29.444 11:31:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:29.705 5524.00 IOPS, 21.58 MiB/s [2024-12-10T11:31:36.531Z] [2024-12-10 11:31:36.313588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.705 [2024-12-10 11:31:36.313659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.313687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.705 [2024-12-10 11:31:36.313704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.313722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.705 [2024-12-10 11:31:36.313738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.313756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.705 [2024-12-10 11:31:36.313770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.313787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:28:29.705 [2024-12-10 11:31:36.314111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.705 [2024-12-10 11:31:36.314159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 
11:31:36.314724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.705 [2024-12-10 11:31:36.314779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.705 [2024-12-10 11:31:36.314796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.314813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.314831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.314849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.314866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.314884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.314901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.314918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.314938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.314958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.314977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.314994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52864 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.315981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.315999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.316017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.316034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.316052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.316069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.316086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.316104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.316121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.316139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.316156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.316174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.316191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.316208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 
11:31:36.316226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.316244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.316263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.706 [2024-12-10 11:31:36.316280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.706 [2024-12-10 11:31:36.316298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.316969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.316986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:29.707 [2024-12-10 11:31:36.317325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317695] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.707 [2024-12-10 11:31:36.317751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.707 [2024-12-10 11:31:36.317768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.317785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.317802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.317820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.317837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.317856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.317874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.317891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.317910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.317928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.317945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.317972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.317991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.318010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.318061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318079] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.318096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.318131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.318166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.318200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.708 [2024-12-10 11:31:36.318234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52432 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.708 [2024-12-10 11:31:36.318783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:29.708 [2024-12-10 11:31:36.318820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.318836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:28:29.708 [2024-12-10 11:31:36.318859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.708 [2024-12-10 11:31:36.318874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.708 [2024-12-10 11:31:36.318893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53400 len:8 PRP1 0x0 PRP2 0x0 00:28:29.708 [2024-12-10 11:31:36.318909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.708 [2024-12-10 11:31:36.319478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:29.708 [2024-12-10 11:31:36.319541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:29.708 [2024-12-10 11:31:36.319703] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.708 [2024-12-10 11:31:36.319743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:28:29.708 [2024-12-10 11:31:36.319764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:28:29.708 [2024-12-10 11:31:36.319799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:29.708 [2024-12-10 11:31:36.319830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:29.708 [2024-12-10 11:31:36.319849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:29.708 [2024-12-10 11:31:36.319866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:29.708 [2024-12-10 11:31:36.319884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
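What follows in the trace is the reconnect loop: the listener was removed at timeout.sh@55, the queued I/O above was aborted, and bdev_nvme retries the connection roughly every 2 s (11:31:36, :38, :40), matching --reconnect-delay-sec 2; each attempt fails with errno 111 (connection refused on Linux) because nothing is listening anymore. Once the 5 s --ctrlr-loss-timeout-sec elapses the controller stays failed (the "already in failed state" entry at 11:31:42) and bdevperf's throughput collapses from ~5524 down to ~818 IOPS. A simplified sketch of the checks the test then makes (timeout.sh@57/@58 and @62/@63 further down), assuming the same rpc.py path and bdevperf socket as above:

# Helpers equivalent to get_controller/get_bdev in the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
get_bdev()       { "$rpc" -s "$sock" bdev_get_bdevs            | jq -r '.[].name'; }

# While reconnects are still being attempted, controller and bdev still exist.
[[ $(get_controller) == NVMe0 ]]
[[ $(get_bdev) == NVMe0n1 ]]

# After the ctrlr-loss timeout has expired, both queries come back empty.
sleep 5
[[ $(get_controller) == '' ]]
[[ $(get_bdev) == '' ]]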
00:28:29.708 [2024-12-10 11:31:36.319902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:29.708 11:31:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:31.580 3274.00 IOPS, 12.79 MiB/s [2024-12-10T11:31:38.406Z] 2182.67 IOPS, 8.53 MiB/s [2024-12-10T11:31:38.406Z] [2024-12-10 11:31:38.320087] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.580 [2024-12-10 11:31:38.320181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:28:31.580 [2024-12-10 11:31:38.320207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:28:31.580 [2024-12-10 11:31:38.320254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:31.580 [2024-12-10 11:31:38.320287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:31.580 [2024-12-10 11:31:38.320308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:31.580 [2024-12-10 11:31:38.320325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:31.580 [2024-12-10 11:31:38.320363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:31.580 [2024-12-10 11:31:38.320384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:31.580 11:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:31.580 11:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:31.580 11:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:31.838 11:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:31.838 11:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:31.838 11:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:31.838 11:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:32.404 11:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:32.404 11:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:33.338 1637.00 IOPS, 6.39 MiB/s [2024-12-10T11:31:40.423Z] 1309.60 IOPS, 5.12 MiB/s [2024-12-10T11:31:40.423Z] [2024-12-10 11:31:40.320630] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-10 11:31:40.320727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:28:33.597 [2024-12-10 11:31:40.320753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:28:33.597 [2024-12-10 11:31:40.320801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:33.597 [2024-12-10 11:31:40.320832] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:33.597 [2024-12-10 11:31:40.320852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:33.597 [2024-12-10 11:31:40.320869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:33.597 [2024-12-10 11:31:40.320894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:33.597 [2024-12-10 11:31:40.320912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:35.465 1091.33 IOPS, 4.26 MiB/s [2024-12-10T11:31:42.552Z] 935.43 IOPS, 3.65 MiB/s [2024-12-10T11:31:42.552Z] [2024-12-10 11:31:42.320989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:35.727 [2024-12-10 11:31:42.321064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:35.727 [2024-12-10 11:31:42.321094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:35.727 [2024-12-10 11:31:42.321111] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:28:35.727 [2024-12-10 11:31:42.321136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:36.567 818.50 IOPS, 3.20 MiB/s 00:28:36.567 Latency(us) 00:28:36.567 [2024-12-10T11:31:43.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.567 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:36.567 Verification LBA range: start 0x0 length 0x4000 00:28:36.567 NVMe0n1 : 8.20 798.77 3.12 15.61 0.00 156908.80 4825.83 7015926.69 00:28:36.567 [2024-12-10T11:31:43.393Z] =================================================================================================================== 00:28:36.567 [2024-12-10T11:31:43.393Z] Total : 798.77 3.12 15.61 0.00 156908.80 4825.83 7015926.69 00:28:36.567 { 00:28:36.567 "results": [ 00:28:36.567 { 00:28:36.567 "job": "NVMe0n1", 00:28:36.567 "core_mask": "0x4", 00:28:36.567 "workload": "verify", 00:28:36.567 "status": "finished", 00:28:36.567 "verify_range": { 00:28:36.567 "start": 0, 00:28:36.567 "length": 16384 00:28:36.567 }, 00:28:36.567 "queue_depth": 128, 00:28:36.567 "io_size": 4096, 00:28:36.567 "runtime": 8.197613, 00:28:36.567 "iops": 798.7691051041321, 00:28:36.567 "mibps": 3.120191816813016, 00:28:36.567 "io_failed": 128, 00:28:36.567 "io_timeout": 0, 00:28:36.567 "avg_latency_us": 156908.80453183723, 00:28:36.567 "min_latency_us": 4825.832727272727, 00:28:36.567 "max_latency_us": 7015926.69090909 00:28:36.567 } 00:28:36.567 ], 00:28:36.567 "core_count": 1 00:28:36.567 } 00:28:37.134 11:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:28:37.134 11:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:37.134 11:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:37.700 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:37.700 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # 
get_bdev 00:28:37.700 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:37.700 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:37.958 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:37.958 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 89243 00:28:37.958 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 89219 00:28:37.958 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 89219 ']' 00:28:37.958 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 89219 00:28:37.958 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:28:37.958 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.958 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89219 00:28:37.958 killing process with pid 89219 00:28:37.958 Received shutdown signal, test time was about 9.447707 seconds 00:28:37.958 00:28:37.959 Latency(us) 00:28:37.959 [2024-12-10T11:31:44.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.959 [2024-12-10T11:31:44.785Z] =================================================================================================================== 00:28:37.959 [2024-12-10T11:31:44.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:37.959 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:37.959 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:37.959 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89219' 00:28:37.959 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 89219 00:28:37.959 11:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 89219 00:28:38.892 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:39.151 [2024-12-10 11:31:45.844863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:39.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:39.151 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=89367 00:28:39.151 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:39.151 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 89367 /var/tmp/bdevperf.sock 00:28:39.151 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 89367 ']' 00:28:39.151 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:39.151 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.151 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:39.151 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.151 11:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:39.410 [2024-12-10 11:31:45.979059] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:28:39.410 [2024-12-10 11:31:45.979231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89367 ] 00:28:39.410 [2024-12-10 11:31:46.163121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.668 [2024-12-10 11:31:46.335750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.926 [2024-12-10 11:31:46.553995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:40.184 11:31:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.185 11:31:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:28:40.185 11:31:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:40.443 11:31:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:41.010 NVMe0n1 00:28:41.010 11:31:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:41.010 11:31:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=89395 00:28:41.010 11:31:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:28:41.010 Running I/O for 10 seconds... 
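The controller is attached with deliberately short recovery windows: as the option names suggest (this reading follows the rpc.py and bdevperf help text, not anything printed in this log), --reconnect-delay-sec 1 retries the TCP connection about once per second, --fast-io-fail-timeout-sec 2 lets queued I/O start failing after roughly two seconds without a connection, and --ctrlr-loss-timeout-sec 5 abandons the controller if it stays unreachable for about five seconds. bdevperf was started with -z, so it sits idle until perform_tests arrives over its RPC socket. A hand-run sketch of the same two steps, copied from the commands recorded above:

    # Attach the remote controller with short reconnect/fail-fast windows,
    # then start the verify workload through bdevperf's RPC interface.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests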
00:28:41.946 11:31:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:42.206 7121.00 IOPS, 27.82 MiB/s [2024-12-10T11:31:49.032Z] [2024-12-10 11:31:48.852954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.206 [2024-12-10 11:31:48.853036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67472 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.853495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.853527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.853559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.853593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.853628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.853662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.853694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:42.207 [2024-12-10 11:31:48.853725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.853983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.853999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.207 [2024-12-10 11:31:48.854310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.854343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.854394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.207 [2024-12-10 11:31:48.854411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.207 [2024-12-10 11:31:48.854427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.854471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.854510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.854548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.854587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.854639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.854972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.854988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 
11:31:48.855118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.855266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.855298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.855330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.855382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.855416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.855448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.855482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.208 [2024-12-10 11:31:48.855517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.208 [2024-12-10 11:31:48.855816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.208 [2024-12-10 11:31:48.855833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.855849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.855865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.855882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.855898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.855914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.855932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.855949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.855965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.855981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.855997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67928 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 
[2024-12-10 11:31:48.856505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.856701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.856972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.856988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.857005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.857021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.857037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.857054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.857070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.857085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.857101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.857117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.857133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.857150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.209 [2024-12-10 11:31:48.857168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.857184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.209 [2024-12-10 11:31:48.857200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.209 [2024-12-10 11:31:48.857216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.210 [2024-12-10 11:31:48.857232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.857247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.210 [2024-12-10 11:31:48.857265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.857284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.210 [2024-12-10 11:31:48.857301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.857318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.210 [2024-12-10 11:31:48.857334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.857364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.210 [2024-12-10 11:31:48.857384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.857400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.210 [2024-12-10 11:31:48.857417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.857469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:42.210 [2024-12-10 11:31:48.857498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:42.210 [2024-12-10 11:31:48.857514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67392 len:8 PRP1 0x0 PRP2 0x0 00:28:42.210 [2024-12-10 11:31:48.857531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.857897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.210 [2024-12-10 11:31:48.857929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.857951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.210 [2024-12-10 11:31:48.857966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.857983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.210 [2024-12-10 11:31:48.857996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.858013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.210 [2024-12-10 11:31:48.858027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.210 [2024-12-10 11:31:48.858045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:42.210 [2024-12-10 11:31:48.858313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.210 [2024-12-10 11:31:48.858380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:42.210 [2024-12-10 11:31:48.858535] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.210 [2024-12-10 11:31:48.858574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:42.210 [2024-12-10 11:31:48.858595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:42.210 [2024-12-10 11:31:48.858634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:42.210 [2024-12-10 11:31:48.858663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:42.210 [2024-12-10 11:31:48.858679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:42.210 [2024-12-10 11:31:48.858697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:42.210 [2024-12-10 11:31:48.858714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
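The long dump above is the expected fallout of pulling the listener mid-run: the target tears down the submission queue, so every in-flight command completes as ABORTED - SQ DELETION, and bdev_nvme then schedules resets that fail with connect() errno 111 (connection refused) for as long as nothing is listening on 10.0.0.3:4420. One way to watch the path while it is down is the same controller query the harness issues elsewhere in this test, pointed at bdevperf's socket (the exact fields it prints are not captured in this log, so treat this as a sketch):

    # Ask bdevperf for its view of the NVMe controller while the listener is removed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers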
00:28:42.210 [2024-12-10 11:31:48.858732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:42.210 11:31:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:28:43.145 4188.50 IOPS, 16.36 MiB/s [2024-12-10T11:31:49.971Z] [2024-12-10 11:31:49.858928] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.145 [2024-12-10 11:31:49.859018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:43.145 [2024-12-10 11:31:49.859047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:43.145 [2024-12-10 11:31:49.859089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:43.145 [2024-12-10 11:31:49.859124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.145 [2024-12-10 11:31:49.859140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.145 [2024-12-10 11:31:49.859163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.145 [2024-12-10 11:31:49.859181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:43.145 [2024-12-10 11:31:49.859200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.145 11:31:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:43.403 [2024-12-10 11:31:50.181712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:43.403 11:31:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 89395 00:28:44.250 2792.33 IOPS, 10.91 MiB/s [2024-12-10T11:31:51.076Z] [2024-12-10 11:31:50.880080] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
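Once timeout.sh@91 re-adds the listener, the next scheduled reconnect succeeds ("Resetting controller successful") and I/O resumes; the outage here lasted roughly two seconds, comfortably inside the five-second --ctrlr-loss-timeout-sec budget, which is why the controller is recovered rather than dropped. The fault-injection pattern the test exercises is just a remove/add pair on the target side; both RPC calls below are copied from this run, and the one-second pause is only illustrative:

    # Simulate a short path outage on the target, then restore it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420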
00:28:46.121 2094.25 IOPS, 8.18 MiB/s [2024-12-10T11:31:53.882Z] 2901.80 IOPS, 11.34 MiB/s [2024-12-10T11:31:54.817Z] 3660.33 IOPS, 14.30 MiB/s [2024-12-10T11:31:55.752Z] 4201.43 IOPS, 16.41 MiB/s [2024-12-10T11:31:57.126Z] 4615.25 IOPS, 18.03 MiB/s [2024-12-10T11:31:58.059Z] 4909.56 IOPS, 19.18 MiB/s [2024-12-10T11:31:58.059Z] 5140.20 IOPS, 20.08 MiB/s 00:28:51.233 Latency(us) 00:28:51.233 [2024-12-10T11:31:58.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.233 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:51.233 Verification LBA range: start 0x0 length 0x4000 00:28:51.233 NVMe0n1 : 10.01 5144.58 20.10 0.00 0.00 24826.25 2532.07 3019898.88 00:28:51.233 [2024-12-10T11:31:58.059Z] =================================================================================================================== 00:28:51.233 [2024-12-10T11:31:58.059Z] Total : 5144.58 20.10 0.00 0.00 24826.25 2532.07 3019898.88 00:28:51.233 { 00:28:51.233 "results": [ 00:28:51.233 { 00:28:51.233 "job": "NVMe0n1", 00:28:51.233 "core_mask": "0x4", 00:28:51.233 "workload": "verify", 00:28:51.233 "status": "finished", 00:28:51.233 "verify_range": { 00:28:51.233 "start": 0, 00:28:51.233 "length": 16384 00:28:51.233 }, 00:28:51.233 "queue_depth": 128, 00:28:51.233 "io_size": 4096, 00:28:51.233 "runtime": 10.013248, 00:28:51.233 "iops": 5144.584454514659, 00:28:51.233 "mibps": 20.096033025447888, 00:28:51.233 "io_failed": 0, 00:28:51.233 "io_timeout": 0, 00:28:51.233 "avg_latency_us": 24826.2476779128, 00:28:51.233 "min_latency_us": 2532.072727272727, 00:28:51.233 "max_latency_us": 3019898.88 00:28:51.233 } 00:28:51.233 ], 00:28:51.233 "core_count": 1 00:28:51.233 } 00:28:51.233 11:31:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=89496 00:28:51.233 11:31:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:51.233 11:31:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:28:51.233 Running I/O for 10 seconds... 
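The summary table and the JSON block above describe the same 10-second run, and the throughput column can be re-derived from the JSON fields with a one-liner (values copied from that result, io_size 4096 bytes):

    # MiB/s = iops * io_size / 2^20; prints ~20.10, matching the reported "mibps" field.
    python3 -c 'print(5144.584454514659 * 4096 / (1024 * 1024))'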
00:28:52.169 11:31:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:52.431 5524.00 IOPS, 21.58 MiB/s [2024-12-10T11:31:59.257Z] [2024-12-10 11:31:59.063082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.063848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.064137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.064399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.064752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.064996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.065247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.065498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.065569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.065740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.065835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.065922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.066003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.066192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.066287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.066368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.066468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.066492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.431 [2024-12-10 11:31:59.066542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.431 [2024-12-10 11:31:59.066564] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.431 [2024-12-10 11:31:59.066578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.431 [2024-12-10 11:31:59.066592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.431 [2024-12-10 11:31:59.066605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.431 [2024-12-10 11:31:59.066619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.431 [2024-12-10 11:31:59.066632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.431 [2024-12-10 11:31:59.066645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.067213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.067325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.067428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.067498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.067698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.067792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.067880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.067947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.068022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.068255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.068368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.068473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.068553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.068737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 
11:31:59.068850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.068918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.068999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.069209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.069328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.069419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.069490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.069677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.069775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.069862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.069943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.070119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.070218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.070302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.070405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.070628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.070739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.070807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.070979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.071086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.071160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 
11:31:59.071234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.071443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.431 [2024-12-10 11:31:59.071537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.071604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.071686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.071790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.071981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.072093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.072170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.072379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.072488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.072560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.072623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.072698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.072880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.072995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.073074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.073143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.073389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.073504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.073590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 
11:31:59.073659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.073733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.073966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.074064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.074131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.074210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.074287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.074389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.074660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.074773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.074848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.074927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.075150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.075244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.075312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.075414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.075489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.075697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.075798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.075883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.076067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 
11:31:59.076174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.076266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.076458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:52.432 [2024-12-10 11:31:59.076484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.076647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.076725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.076796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.076888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.076954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.077024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.077089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.077165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.077241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.077303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.077381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.077472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:28:52.432 [2024-12-10 11:31:59.077638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.077983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.077998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.078012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.078027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 
11:31:59.078040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.432 [2024-12-10 11:31:59.078056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.432 [2024-12-10 11:31:59.078070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.078984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.078997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 
[2024-12-10 11:31:59.079274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.433 [2024-12-10 11:31:59.079288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.433 [2024-12-10 11:31:59.079303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.079957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.079984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53176 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:52.434 [2024-12-10 11:31:59.080690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.434 [2024-12-10 11:31:59.080706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.434 [2024-12-10 11:31:59.080719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.080747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.080776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.080805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.080833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.080862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.080890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.080919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.080948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 
11:31:59.080977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.080993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.081007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.081035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.081064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.081093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.081121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.081151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.081179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.435 [2024-12-10 11:31:59.081632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.435 [2024-12-10 11:31:59.081660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.081675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:28:52.435 [2024-12-10 11:31:59.081693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:52.435 [2024-12-10 11:31:59.081705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:52.435 [2024-12-10 11:31:59.081718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53408 len:8 PRP1 0x0 PRP2 0x0 00:28:52.435 [2024-12-10 11:31:59.081731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.435 [2024-12-10 11:31:59.082253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:52.436 [2024-12-10 11:31:59.082418] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.436 [2024-12-10 11:31:59.082453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:52.436 [2024-12-10 11:31:59.082471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:52.436 [2024-12-10 11:31:59.082501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:52.436 [2024-12-10 11:31:59.082527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:28:52.436 [2024-12-10 11:31:59.082546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:28:52.436 [2024-12-10 11:31:59.082561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:52.436 [2024-12-10 11:31:59.082577] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:28:52.436 [2024-12-10 11:31:59.082592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:52.436 11:31:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:28:53.371 3282.00 IOPS, 12.82 MiB/s [2024-12-10T11:32:00.197Z] [2024-12-10 11:32:00.082808] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.371 [2024-12-10 11:32:00.083130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:53.371 [2024-12-10 11:32:00.083303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:53.371 [2024-12-10 11:32:00.083639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:53.371 [2024-12-10 11:32:00.083720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:28:53.371 [2024-12-10 11:32:00.083742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:28:53.371 [2024-12-10 11:32:00.083759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:53.371 [2024-12-10 11:32:00.083778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:28:53.371 [2024-12-10 11:32:00.083794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:54.305 2188.00 IOPS, 8.55 MiB/s [2024-12-10T11:32:01.131Z] [2024-12-10 11:32:01.083970] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.305 [2024-12-10 11:32:01.084044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:54.305 [2024-12-10 11:32:01.084067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:54.305 [2024-12-10 11:32:01.084105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:54.305 [2024-12-10 11:32:01.084154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:28:54.305 [2024-12-10 11:32:01.084172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:28:54.305 [2024-12-10 11:32:01.084188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:54.305 [2024-12-10 11:32:01.084205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:28:54.305 [2024-12-10 11:32:01.084221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:55.497 1641.00 IOPS, 6.41 MiB/s [2024-12-10T11:32:02.323Z] [2024-12-10 11:32:02.087329] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.497 [2024-12-10 11:32:02.087413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:55.497 [2024-12-10 11:32:02.087436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:55.497 [2024-12-10 11:32:02.087743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:55.497 [2024-12-10 11:32:02.088020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:28:55.497 [2024-12-10 11:32:02.088043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:28:55.497 [2024-12-10 11:32:02.088059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:55.497 [2024-12-10 11:32:02.088075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:28:55.497 [2024-12-10 11:32:02.088091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:55.497 11:32:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:55.756 [2024-12-10 11:32:02.387050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:55.756 11:32:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 89496 00:28:56.322 1312.80 IOPS, 5.13 MiB/s [2024-12-10T11:32:03.148Z] [2024-12-10 11:32:03.121504] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
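The trace above captures the reconnect scenario end to end: while the target listener is gone, every connect() from the uring initiator fails with errno 111 (ECONNREFUSED) and bdev_nvme retries the controller reset roughly once per second; the first reset attempted after nvmf_subsystem_add_listener restores the listener succeeds. Below is a minimal Python sketch of driving that listener bounce with the same rpc.py commands that appear in the trace; the helper name and the 3-second down time are illustrative assumptions, not taken from the original test script.

import subprocess
import time

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"       # path as it appears in the trace
NQN = "nqn.2016-06.io.spdk:cnode1"
LISTENER = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]  # transport/address/port from the log

def bounce_listener(down_seconds=3.0):
    # Drop the TCP listener; the initiator now gets errno 111 on every reconnect attempt.
    subprocess.run([RPC, "nvmf_subsystem_remove_listener", NQN, *LISTENER], check=True)
    # Leave the listener down for a while (the test sleeps 3 seconds at this point).
    time.sleep(down_seconds)
    # Re-add the listener; the next controller reset attempt is expected to succeed.
    subprocess.run([RPC, "nvmf_subsystem_add_listener", NQN, *LISTENER], check=True)

if __name__ == "__main__":
    bounce_listener()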
00:28:58.189 1991.17 IOPS, 7.78 MiB/s [2024-12-10T11:32:05.947Z] 2717.00 IOPS, 10.61 MiB/s [2024-12-10T11:32:06.879Z] 3283.38 IOPS, 12.83 MiB/s [2024-12-10T11:32:07.874Z] 3695.89 IOPS, 14.44 MiB/s [2024-12-10T11:32:07.874Z] 4069.90 IOPS, 15.90 MiB/s 00:29:01.048 Latency(us) 00:29:01.048 [2024-12-10T11:32:07.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.048 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:01.048 Verification LBA range: start 0x0 length 0x4000 00:29:01.048 NVMe0n1 : 10.01 4074.15 15.91 3410.39 0.00 17059.02 811.75 3050402.91 00:29:01.048 [2024-12-10T11:32:07.874Z] =================================================================================================================== 00:29:01.048 [2024-12-10T11:32:07.874Z] Total : 4074.15 15.91 3410.39 0.00 17059.02 0.00 3050402.91 00:29:01.048 { 00:29:01.048 "results": [ 00:29:01.048 { 00:29:01.048 "job": "NVMe0n1", 00:29:01.048 "core_mask": "0x4", 00:29:01.048 "workload": "verify", 00:29:01.048 "status": "finished", 00:29:01.048 "verify_range": { 00:29:01.048 "start": 0, 00:29:01.048 "length": 16384 00:29:01.048 }, 00:29:01.048 "queue_depth": 128, 00:29:01.048 "io_size": 4096, 00:29:01.048 "runtime": 10.011179, 00:29:01.048 "iops": 4074.1455127313175, 00:29:01.048 "mibps": 15.914630909106709, 00:29:01.048 "io_failed": 34142, 00:29:01.048 "io_timeout": 0, 00:29:01.048 "avg_latency_us": 17059.015402071538, 00:29:01.048 "min_latency_us": 811.7527272727273, 00:29:01.048 "max_latency_us": 3050402.909090909 00:29:01.048 } 00:29:01.048 ], 00:29:01.048 "core_count": 1 00:29:01.048 } 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 89367 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 89367 ']' 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 89367 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89367 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:01.306 killing process with pid 89367 00:29:01.306 Received shutdown signal, test time was about 10.000000 seconds 00:29:01.306 00:29:01.306 Latency(us) 00:29:01.306 [2024-12-10T11:32:08.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.306 [2024-12-10T11:32:08.132Z] =================================================================================================================== 00:29:01.306 [2024-12-10T11:32:08.132Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89367' 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 89367 00:29:01.306 11:32:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 89367 00:29:02.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
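The JSON object printed above is the raw perform_tests result behind the formatted Latency table: the MiB/s column is iops multiplied by the 4096-byte I/O size, and the Fail/s column is io_failed divided by runtime. A small sketch of that arithmetic, assuming the result has exactly the field names shown above (the helper and variable names are mine):

import json

def summarize(perform_tests_json, io_size=4096):
    # Pick the first (and here only) job entry out of the "results" array.
    result = json.loads(perform_tests_json)["results"][0]
    runtime = result["runtime"]            # seconds, 10.011179 in the run above
    iops = result["iops"]                  # 4074.14... in the run above
    return {
        "job": result["job"],
        "mib_per_s": iops * io_size / (1024 * 1024),    # ~15.91 for the numbers above
        "fail_per_s": result["io_failed"] / runtime,    # 34142 / 10.011179 ~= 3410.39
        "avg_latency_us": result["avg_latency_us"],
        "max_latency_us": result["max_latency_us"],
    }

With the values printed above this reproduces the table's 15.91 MiB/s and roughly 3410 failed I/Os per second.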
00:29:02.238 11:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=89616 00:29:02.238 11:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:29:02.238 11:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 89616 /var/tmp/bdevperf.sock 00:29:02.238 11:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 89616 ']' 00:29:02.238 11:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:02.238 11:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.238 11:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:02.238 11:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.238 11:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:02.238 [2024-12-10 11:32:09.014363] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:29:02.238 [2024-12-10 11:32:09.014508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89616 ] 00:29:02.496 [2024-12-10 11:32:09.188571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.496 [2024-12-10 11:32:09.290788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.755 [2024-12-10 11:32:09.469471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:03.320 11:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.320 11:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:29:03.320 11:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=89628 00:29:03.320 11:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89616 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:03.320 11:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:03.578 11:32:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:03.836 NVMe0n1 00:29:03.836 11:32:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=89675 00:29:03.836 11:32:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:03.836 11:32:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:29:04.094 Running I/O for 10 seconds... 
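The commands traced above start a second bdevperf in wait mode (-z) on /var/tmp/bdevperf.sock, attach the controller with a 5-second ctrlr-loss timeout and a 2-second reconnect delay, and then launch the queued randread workload through bdevperf.py. Below is a minimal sketch that replays the attach and run steps against an already-running bdevperf instance, assuming the paths shown in the trace; the bdev_nvme_set_options call from the trace is omitted here and the helper name is illustrative.

import subprocess

SPDK = "/home/vagrant/spdk_repo/spdk"        # repo path as it appears in the trace
SOCK = "/var/tmp/bdevperf.sock"              # bdevperf RPC socket from the -r flag above

def attach_and_run():
    rpc = [SPDK + "/scripts/rpc.py", "-s", SOCK]
    # Attach the controller with the same flags as the trace: declare the controller
    # lost after 5 seconds without a connection, retrying the connection every 2 seconds.
    subprocess.run(rpc + ["bdev_nvme_attach_controller",
                          "-b", "NVMe0", "-t", "tcp", "-a", "10.0.0.3", "-s", "4420",
                          "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
                          "--ctrlr-loss-timeout-sec", "5", "--reconnect-delay-sec", "2"],
                   check=True)
    # Kick off the queued randread workload; bdevperf prints the Latency table when it finishes.
    subprocess.run([SPDK + "/examples/bdev/bdevperf/bdevperf.py", "-s", SOCK, "perform_tests"],
                   check=True)

if __name__ == "__main__":
    attach_and_run()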
00:29:05.029 11:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:05.290 11557.00 IOPS, 45.14 MiB/s [2024-12-10T11:32:12.116Z] [2024-12-10 11:32:11.915729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.915992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916086] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.290 [2024-12-10 11:32:11.916299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916400] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916680] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916962] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.916988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.291 [2024-12-10 11:32:11.917194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.291 [2024-12-10 11:32:11.917222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 
11:32:11.917234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-12-10 11:32:11.917236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same id:0 cdw10:00000000 cdw11:00000000 00:29:05.291 with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.291 [2024-12-10 11:32:11.917263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.291 [2024-12-10 11:32:11.917275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.291 [2024-12-10 11:32:11.917289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.291 [2024-12-10 11:32:11.917301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-10 11:32:11.917316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.291 with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.291 [2024-12-10 11:32:11.917369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be 
set 00:29:05.292 [2024-12-10 11:32:11.917437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:05.292 [2024-12-10 11:32:11.917674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.917702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.917756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.917772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.917792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.917807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.917826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.917840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.917859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.917873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.917892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.917906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.917928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.917943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.917963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.917977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.917995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.292 [2024-12-10 11:32:11.918903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.292 [2024-12-10 11:32:11.918924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.918939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.918972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.918986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 
11:32:11.919004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.919982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.919999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.920020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.920128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.920315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.920422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.920603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.920752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.920916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.921058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.921149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.921297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.921406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37496 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.921554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.921709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.921930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.922015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.922180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.922253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.922322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.922546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.922571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.922593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.922607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.922627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.922642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.922663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.922678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.922699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.293 [2024-12-10 11:32:11.922714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.293 [2024-12-10 11:32:11.922733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.922747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.922766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:05.294 [2024-12-10 11:32:11.922782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.922801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.922816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.922835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.922849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.922867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.922882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.922900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.922914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.922933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.922947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.922967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.922982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923115] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.294 [2024-12-10 11:32:11.923951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.294 [2024-12-10 11:32:11.923966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.923984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.923998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.924853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.924931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.925074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.925211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.925296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.925470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.925578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.925725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.925816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.926015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 
[2024-12-10 11:32:11.926105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.295 [2024-12-10 11:32:11.926251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.926332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:29:05.295 [2024-12-10 11:32:11.926479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:05.295 [2024-12-10 11:32:11.926609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:05.295 [2024-12-10 11:32:11.926670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52928 len:8 PRP1 0x0 PRP2 0x0 00:29:05.295 [2024-12-10 11:32:11.926821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.295 [2024-12-10 11:32:11.927211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:05.295 [2024-12-10 11:32:11.927704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:05.295 [2024-12-10 11:32:11.928026] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.295 [2024-12-10 11:32:11.928064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:29:05.295 [2024-12-10 11:32:11.928086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:29:05.295 [2024-12-10 11:32:11.928117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:05.295 [2024-12-10 11:32:11.928151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:29:05.295 [2024-12-10 11:32:11.928166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:29:05.295 [2024-12-10 11:32:11.928185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:05.295 [2024-12-10 11:32:11.928206] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:29:05.295 [2024-12-10 11:32:11.928224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:05.295 11:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 89675 00:29:07.167 6540.50 IOPS, 25.55 MiB/s [2024-12-10T11:32:13.993Z] 4360.33 IOPS, 17.03 MiB/s [2024-12-10T11:32:13.993Z] [2024-12-10 11:32:13.928482] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-12-10 11:32:13.928746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:29:07.167 [2024-12-10 11:32:13.928936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:29:07.167 [2024-12-10 11:32:13.929242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:07.167 [2024-12-10 11:32:13.929504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:29:07.167 [2024-12-10 11:32:13.929529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:29:07.167 [2024-12-10 11:32:13.929550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:07.167 [2024-12-10 11:32:13.929569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:29:07.167 [2024-12-10 11:32:13.929590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:09.039 3270.25 IOPS, 12.77 MiB/s [2024-12-10T11:32:16.123Z] 2616.20 IOPS, 10.22 MiB/s [2024-12-10T11:32:16.123Z] [2024-12-10 11:32:15.929797] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.297 [2024-12-10 11:32:15.930010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:29:09.297 [2024-12-10 11:32:15.930053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:29:09.297 [2024-12-10 11:32:15.930097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:09.298 [2024-12-10 11:32:15.930131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:29:09.298 [2024-12-10 11:32:15.930148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:29:09.298 [2024-12-10 11:32:15.930171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:09.298 [2024-12-10 11:32:15.930190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:29:09.298 [2024-12-10 11:32:15.930212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:11.168 2180.17 IOPS, 8.52 MiB/s [2024-12-10T11:32:17.994Z] 1868.71 IOPS, 7.30 MiB/s [2024-12-10T11:32:17.994Z] [2024-12-10 11:32:17.930322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:29:11.168 [2024-12-10 11:32:17.930431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:29:11.168 [2024-12-10 11:32:17.930453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:29:11.168 [2024-12-10 11:32:17.930477] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:29:11.168 [2024-12-10 11:32:17.930497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:29:12.363 1635.12 IOPS, 6.39 MiB/s 00:29:12.363 Latency(us) 00:29:12.363 [2024-12-10T11:32:19.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.363 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:29:12.363 NVMe0n1 : 8.16 1603.43 6.26 15.69 0.00 79113.99 10724.07 7046430.72 00:29:12.363 [2024-12-10T11:32:19.189Z] =================================================================================================================== 00:29:12.363 [2024-12-10T11:32:19.189Z] Total : 1603.43 6.26 15.69 0.00 79113.99 10724.07 7046430.72 00:29:12.363 { 00:29:12.363 "results": [ 00:29:12.363 { 00:29:12.363 "job": "NVMe0n1", 00:29:12.363 "core_mask": "0x4", 00:29:12.363 "workload": "randread", 00:29:12.363 "status": "finished", 00:29:12.363 "queue_depth": 128, 00:29:12.363 "io_size": 4096, 00:29:12.363 "runtime": 8.158112, 00:29:12.363 "iops": 1603.4347162676854, 00:29:12.363 "mibps": 6.263416860420646, 00:29:12.363 "io_failed": 128, 00:29:12.363 "io_timeout": 0, 00:29:12.363 "avg_latency_us": 79113.99122196299, 00:29:12.363 "min_latency_us": 10724.072727272727, 00:29:12.363 "max_latency_us": 7046430.72 00:29:12.363 } 00:29:12.363 ], 00:29:12.363 "core_count": 1 00:29:12.363 } 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:12.363 Attaching 5 probes... 
00:29:12.363 1489.334531: reset bdev controller NVMe0 00:29:12.363 1489.585481: reconnect bdev controller NVMe0 00:29:12.363 3489.934853: reconnect delay bdev controller NVMe0 00:29:12.363 3489.979273: reconnect bdev controller NVMe0 00:29:12.363 5491.308259: reconnect delay bdev controller NVMe0 00:29:12.363 5491.333845: reconnect bdev controller NVMe0 00:29:12.363 7491.933452: reconnect delay bdev controller NVMe0 00:29:12.363 7491.965829: reconnect bdev controller NVMe0 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 89628 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 89616 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 89616 ']' 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 89616 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.363 11:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89616 00:29:12.363 killing process with pid 89616 00:29:12.363 Received shutdown signal, test time was about 8.240943 seconds 00:29:12.363 00:29:12.363 Latency(us) 00:29:12.363 [2024-12-10T11:32:19.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.363 [2024-12-10T11:32:19.189Z] =================================================================================================================== 00:29:12.363 [2024-12-10T11:32:19.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.363 11:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:12.364 11:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:12.364 11:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89616' 00:29:12.364 11:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 89616 00:29:12.364 11:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 89616 00:29:13.299 11:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.557 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:13.557 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:29:13.557 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.557 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:29:13.557 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.557 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:29:13.557 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.557 11:32:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.557 rmmod nvme_tcp 00:29:13.815 rmmod nvme_fabrics 00:29:13.815 rmmod nvme_keyring 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 89164 ']' 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 89164 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 89164 ']' 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 89164 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.815 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89164 00:29:13.815 killing process with pid 89164 00:29:13.816 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:13.816 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:13.816 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89164' 00:29:13.816 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 89164 00:29:13.816 11:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 89164 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:15.217 11:32:21 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:15.217 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:29:15.218 ************************************ 00:29:15.218 END TEST nvmf_timeout 00:29:15.218 ************************************ 00:29:15.218 00:29:15.218 real 0m52.061s 00:29:15.218 user 2m31.784s 00:29:15.218 sys 0m5.643s 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:15.218 ************************************ 00:29:15.218 END TEST nvmf_host 00:29:15.218 ************************************ 00:29:15.218 00:29:15.218 real 6m44.784s 00:29:15.218 user 18m46.905s 00:29:15.218 sys 1m19.911s 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.218 11:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.218 11:32:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:15.218 11:32:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:29:15.218 ************************************ 00:29:15.218 END TEST nvmf_tcp 00:29:15.218 ************************************ 00:29:15.218 00:29:15.218 real 17m57.891s 00:29:15.218 user 46m48.837s 00:29:15.218 sys 4m10.018s 00:29:15.218 11:32:21 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.218 11:32:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:15.218 11:32:22 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:29:15.218 11:32:22 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:15.218 11:32:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:15.218 11:32:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.218 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:29:15.476 ************************************ 00:29:15.476 START TEST nvmf_dif 00:29:15.476 ************************************ 00:29:15.476 11:32:22 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:15.476 * Looking for test storage... 
00:29:15.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:15.476 11:32:22 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:15.476 11:32:22 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:29:15.476 11:32:22 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:15.476 11:32:22 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.476 11:32:22 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:29:15.477 11:32:22 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.477 11:32:22 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:15.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.477 --rc genhtml_branch_coverage=1 00:29:15.477 --rc genhtml_function_coverage=1 00:29:15.477 --rc genhtml_legend=1 00:29:15.477 --rc geninfo_all_blocks=1 00:29:15.477 --rc geninfo_unexecuted_blocks=1 00:29:15.477 00:29:15.477 ' 00:29:15.477 11:32:22 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:15.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.477 --rc genhtml_branch_coverage=1 00:29:15.477 --rc genhtml_function_coverage=1 00:29:15.477 --rc genhtml_legend=1 00:29:15.477 --rc geninfo_all_blocks=1 00:29:15.477 --rc geninfo_unexecuted_blocks=1 00:29:15.477 00:29:15.477 ' 00:29:15.477 11:32:22 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:15.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.477 --rc genhtml_branch_coverage=1 00:29:15.477 --rc genhtml_function_coverage=1 00:29:15.477 --rc genhtml_legend=1 00:29:15.477 --rc geninfo_all_blocks=1 00:29:15.477 --rc geninfo_unexecuted_blocks=1 00:29:15.477 00:29:15.477 ' 00:29:15.477 11:32:22 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:15.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.477 --rc genhtml_branch_coverage=1 00:29:15.477 --rc genhtml_function_coverage=1 00:29:15.477 --rc genhtml_legend=1 00:29:15.477 --rc geninfo_all_blocks=1 00:29:15.477 --rc geninfo_unexecuted_blocks=1 00:29:15.477 00:29:15.477 ' 00:29:15.477 11:32:22 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.477 11:32:22 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.477 11:32:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.477 11:32:22 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.477 11:32:22 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.477 11:32:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:15.477 11:32:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:15.477 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.477 11:32:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:15.477 11:32:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:15.477 11:32:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:15.477 11:32:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:15.477 11:32:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.477 11:32:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:15.477 11:32:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:15.477 11:32:22 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:15.477 Cannot find device "nvmf_init_br" 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:15.477 Cannot find device "nvmf_init_br2" 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:15.477 11:32:22 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:15.736 Cannot find device "nvmf_tgt_br" 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@164 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:15.736 Cannot find device "nvmf_tgt_br2" 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@165 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:15.736 Cannot find device "nvmf_init_br" 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@166 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:15.736 Cannot find device "nvmf_init_br2" 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@167 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:15.736 Cannot find device "nvmf_tgt_br" 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@168 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:15.736 Cannot find device "nvmf_tgt_br2" 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@169 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:15.736 Cannot find device "nvmf_br" 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@170 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:29:15.736 Cannot find device "nvmf_init_if" 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@171 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:15.736 Cannot find device "nvmf_init_if2" 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@172 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:15.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@173 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:15.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@174 -- # true 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:15.736 11:32:22 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:15.995 11:32:22 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:15.995 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:15.995 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:29:15.995 00:29:15.995 --- 10.0.0.3 ping statistics --- 00:29:15.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.995 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:15.995 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:15.995 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:29:15.995 00:29:15.995 --- 10.0.0.4 ping statistics --- 00:29:15.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.995 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:15.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:15.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:29:15.995 00:29:15.995 --- 10.0.0.1 ping statistics --- 00:29:15.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.995 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:15.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:15.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:29:15.995 00:29:15.995 --- 10.0.0.2 ping statistics --- 00:29:15.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.995 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:29:15.995 11:32:22 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:16.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:16.254 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:16.254 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:16.254 11:32:23 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.254 11:32:23 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:16.254 11:32:23 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:16.254 11:32:23 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.254 11:32:23 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:16.254 11:32:23 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:16.513 11:32:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:16.513 11:32:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:16.513 11:32:23 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:16.513 11:32:23 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:16.513 11:32:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:16.513 11:32:23 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=90178 00:29:16.513 11:32:23 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 90178 00:29:16.513 11:32:23 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:16.513 11:32:23 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 90178 ']' 00:29:16.513 11:32:23 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.513 11:32:23 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.513 11:32:23 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.513 11:32:23 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.513 11:32:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:16.513 [2024-12-10 11:32:23.233446] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:29:16.513 [2024-12-10 11:32:23.233612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.772 [2024-12-10 11:32:23.420241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.772 [2024-12-10 11:32:23.527013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
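At this point nvmf_veth_init has finished building the isolated test network: a network namespace nvmf_tgt_ns_spdk holds the target-side veth endpoints (10.0.0.3 and 10.0.0.4), the initiator-side endpoints (10.0.0.1 and 10.0.0.2) stay in the root namespace, the bridge-facing peers are enslaved to nvmf_br, and iptables accepts NVMe/TCP traffic on port 4420. The "Cannot find device" and "Cannot open network namespace" lines above are the harness tearing down a topology that does not exist yet, so they are expected. A condensed sketch of the equivalent setup, limited to the first interface pair and using the same names and addresses the log shows:

  # namespace for the SPDK target; the initiator side stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: *_if is the traffic endpoint, *_br is the bridge-facing peer
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addressing: initiator 10.0.0.1/24, target 10.0.0.3/24
  # (the *_if2 pair repeats this with 10.0.0.2 and 10.0.0.4)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring the links up and join the peers to one bridge
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # admit NVMe/TCP (port 4420) and bridge-internal forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow simply confirm both directions of the initiator/target path before the target application is started.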
00:29:16.772 [2024-12-10 11:32:23.527084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.772 [2024-12-10 11:32:23.527106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.772 [2024-12-10 11:32:23.527131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.772 [2024-12-10 11:32:23.527146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.772 [2024-12-10 11:32:23.528358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.030 [2024-12-10 11:32:23.720014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:29:17.598 11:32:24 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:17.598 11:32:24 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.598 11:32:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:17.598 11:32:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:17.598 [2024-12-10 11:32:24.295325] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.598 11:32:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.598 11:32:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:17.598 ************************************ 00:29:17.598 START TEST fio_dif_1_default 00:29:17.598 ************************************ 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:17.598 bdev_null0 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:17.598 
11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:17.598 [2024-12-10 11:32:24.339542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:17.598 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.599 { 00:29:17.599 "params": { 00:29:17.599 "name": "Nvme$subsystem", 00:29:17.599 "trtype": "$TEST_TRANSPORT", 00:29:17.599 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:17.599 "adrfam": "ipv4", 00:29:17.599 "trsvcid": "$NVMF_PORT", 00:29:17.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.599 "hdgst": ${hdgst:-false}, 00:29:17.599 "ddgst": ${ddgst:-false} 00:29:17.599 }, 00:29:17.599 "method": "bdev_nvme_attach_controller" 00:29:17.599 } 00:29:17.599 EOF 00:29:17.599 )") 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:17.599 "params": { 00:29:17.599 "name": "Nvme0", 00:29:17.599 "trtype": "tcp", 00:29:17.599 "traddr": "10.0.0.3", 00:29:17.599 "adrfam": "ipv4", 00:29:17.599 "trsvcid": "4420", 00:29:17.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:17.599 "hdgst": false, 00:29:17.599 "ddgst": false 00:29:17.599 }, 00:29:17.599 "method": "bdev_nvme_attach_controller" 00:29:17.599 }' 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:17.599 11:32:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:17.858 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:17.858 fio-3.35 00:29:17.858 Starting 1 thread 00:29:30.095 00:29:30.095 filename0: (groupid=0, jobs=1): err= 0: pid=90241: Tue Dec 10 11:32:35 2024 00:29:30.095 read: IOPS=6624, BW=25.9MiB/s (27.1MB/s)(259MiB/10001msec) 00:29:30.095 slat (usec): min=5, max=617, avg=11.84, stdev= 5.50 00:29:30.095 clat (usec): min=445, max=2112, avg=568.22, stdev=45.79 00:29:30.095 lat (usec): min=453, max=2129, avg=580.07, stdev=47.30 00:29:30.095 clat percentiles (usec): 00:29:30.095 | 1.00th=[ 482], 5.00th=[ 502], 10.00th=[ 519], 20.00th=[ 537], 00:29:30.095 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 570], 60.00th=[ 578], 00:29:30.095 | 70.00th=[ 586], 80.00th=[ 603], 90.00th=[ 619], 95.00th=[ 635], 00:29:30.095 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 799], 99.95th=[ 848], 00:29:30.095 | 99.99th=[ 1663] 00:29:30.095 bw ( KiB/s): min=24288, max=27296, per=100.00%, avg=26549.89, stdev=681.59, samples=19 00:29:30.095 iops : min= 6072, max= 6824, avg=6637.47, stdev=170.40, samples=19 00:29:30.095 lat (usec) : 500=4.48%, 750=95.25%, 
1000=0.24% 00:29:30.095 lat (msec) : 2=0.03%, 4=0.01% 00:29:30.095 cpu : usr=86.31%, sys=11.74%, ctx=28, majf=0, minf=1074 00:29:30.095 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:30.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:30.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:30.095 issued rwts: total=66256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:30.095 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:30.095 00:29:30.095 Run status group 0 (all jobs): 00:29:30.096 READ: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=259MiB (271MB), run=10001-10001msec 00:29:30.096 ----------------------------------------------------- 00:29:30.096 Suppressions used: 00:29:30.096 count bytes template 00:29:30.096 1 8 /usr/src/fio/parse.c 00:29:30.096 1 8 libtcmalloc_minimal.so 00:29:30.096 1 904 libcrypto.so 00:29:30.096 ----------------------------------------------------- 00:29:30.096 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 00:29:30.096 real 0m12.324s 00:29:30.096 user 0m10.507s 00:29:30.096 sys 0m1.527s 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 ************************************ 00:29:30.096 END TEST fio_dif_1_default 00:29:30.096 ************************************ 00:29:30.096 11:32:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:30.096 11:32:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:30.096 11:32:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 ************************************ 00:29:30.096 START TEST fio_dif_1_multi_subsystems 00:29:30.096 ************************************ 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
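The fio_dif_1_default case that just completed follows the pattern every fio_dif_* test in this suite uses: a null bdev with 16-byte per-block metadata and a DIF type is created on the target, exported through an NVMe-oF TCP subsystem listening on the in-namespace address 10.0.0.3:4420, and fio then drives it through SPDK's bdev ioengine with a generated JSON config that attaches the controller over TCP. A compressed sketch of that flow with the values from the log; rpc.py stands in for the harness's rpc_cmd wrapper, and the file paths are illustrative (the harness actually feeds the JSON and job file through /dev/fd/62 and /dev/fd/61):

  # target side, against the nvmf_tgt started earlier inside nvmf_tgt_ns_spdk
  rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # initiator side: fio with the SPDK bdev plugin preloaded (ASan first, as in the log)
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    fio --ioengine=spdk_bdev --spdk_json_conf ./nvme0.json ./dif.fio

The JSON printed above ("method": "bdev_nvme_attach_controller", traddr 10.0.0.3, trsvcid 4420, subnqn ...cnode0) is the controller-attach entry that gen_nvmf_target_json wraps into the bdev config handed to fio, so the job labelled filename0 reads the namespace bdev exposed by cnode0 rather than a local device.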
target/dif.sh@94 -- # create_subsystems 0 1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 bdev_null0 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 [2024-12-10 11:32:36.718017] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 bdev_null1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:30.096 11:32:36 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:30.096 { 00:29:30.096 "params": { 00:29:30.096 "name": "Nvme$subsystem", 00:29:30.096 "trtype": "$TEST_TRANSPORT", 00:29:30.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.096 "adrfam": "ipv4", 00:29:30.096 "trsvcid": "$NVMF_PORT", 00:29:30.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.096 "hdgst": ${hdgst:-false}, 00:29:30.096 "ddgst": ${ddgst:-false} 00:29:30.096 }, 00:29:30.096 "method": "bdev_nvme_attach_controller" 00:29:30.096 } 00:29:30.096 EOF 00:29:30.096 )") 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:30.096 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:30.096 { 00:29:30.096 "params": { 00:29:30.096 "name": "Nvme$subsystem", 00:29:30.096 "trtype": "$TEST_TRANSPORT", 00:29:30.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.096 "adrfam": "ipv4", 00:29:30.096 "trsvcid": "$NVMF_PORT", 00:29:30.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.096 "hdgst": ${hdgst:-false}, 00:29:30.096 "ddgst": ${ddgst:-false} 00:29:30.096 }, 00:29:30.096 "method": "bdev_nvme_attach_controller" 00:29:30.097 } 00:29:30.097 EOF 00:29:30.097 )") 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:30.097 "params": { 00:29:30.097 "name": "Nvme0", 00:29:30.097 "trtype": "tcp", 00:29:30.097 "traddr": "10.0.0.3", 00:29:30.097 "adrfam": "ipv4", 00:29:30.097 "trsvcid": "4420", 00:29:30.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:30.097 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:30.097 "hdgst": false, 00:29:30.097 "ddgst": false 00:29:30.097 }, 00:29:30.097 "method": "bdev_nvme_attach_controller" 00:29:30.097 },{ 00:29:30.097 "params": { 00:29:30.097 "name": "Nvme1", 00:29:30.097 "trtype": "tcp", 00:29:30.097 "traddr": "10.0.0.3", 00:29:30.097 "adrfam": "ipv4", 00:29:30.097 "trsvcid": "4420", 00:29:30.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.097 "hdgst": false, 00:29:30.097 "ddgst": false 00:29:30.097 }, 00:29:30.097 "method": "bdev_nvme_attach_controller" 00:29:30.097 }' 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:30.097 11:32:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:30.355 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:30.355 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:30.355 fio-3.35 00:29:30.356 Starting 2 threads 00:29:42.580 00:29:42.580 filename0: (groupid=0, jobs=1): err= 0: pid=90402: Tue Dec 10 11:32:47 2024 00:29:42.580 read: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(147MiB/10001msec) 00:29:42.580 slat (usec): min=5, max=116, avg=16.27, stdev= 3.74 00:29:42.580 clat (usec): min=619, max=7650, avg=1018.82, stdev=84.43 00:29:42.580 lat (usec): min=629, max=7678, avg=1035.09, stdev=85.17 00:29:42.580 clat percentiles (usec): 00:29:42.580 | 1.00th=[ 898], 5.00th=[ 930], 10.00th=[ 963], 20.00th=[ 988], 00:29:42.580 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1020], 60.00th=[ 1029], 00:29:42.580 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1090], 00:29:42.580 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1221], 99.95th=[ 1303], 00:29:42.580 | 99.99th=[ 7635] 00:29:42.580 bw ( KiB/s): min=14784, max=15200, per=50.06%, avg=15060.21, stdev=114.99, samples=19 00:29:42.580 iops : min= 3696, max= 3800, avg=3765.05, stdev=28.75, samples=19 00:29:42.580 lat (usec) : 750=0.01%, 1000=32.05% 00:29:42.580 lat (msec) : 2=67.93%, 10=0.01% 00:29:42.580 cpu : usr=89.49%, sys=9.04%, ctx=46, majf=0, minf=1074 00:29:42.580 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.580 issued rwts: total=37604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.580 latency : 
target=0, window=0, percentile=100.00%, depth=4 00:29:42.580 filename1: (groupid=0, jobs=1): err= 0: pid=90403: Tue Dec 10 11:32:47 2024 00:29:42.580 read: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(147MiB/10001msec) 00:29:42.580 slat (nsec): min=5442, max=88784, avg=16274.90, stdev=3761.43 00:29:42.580 clat (usec): min=574, max=7153, avg=1018.35, stdev=73.92 00:29:42.580 lat (usec): min=583, max=7194, avg=1034.63, stdev=74.29 00:29:42.580 clat percentiles (usec): 00:29:42.580 | 1.00th=[ 930], 5.00th=[ 963], 10.00th=[ 971], 20.00th=[ 988], 00:29:42.580 | 30.00th=[ 996], 40.00th=[ 1004], 50.00th=[ 1012], 60.00th=[ 1020], 00:29:42.580 | 70.00th=[ 1037], 80.00th=[ 1045], 90.00th=[ 1057], 95.00th=[ 1090], 00:29:42.580 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1205], 00:29:42.580 | 99.99th=[ 7111] 00:29:42.580 bw ( KiB/s): min=14784, max=15200, per=50.07%, avg=15063.58, stdev=117.26, samples=19 00:29:42.580 iops : min= 3696, max= 3800, avg=3765.89, stdev=29.31, samples=19 00:29:42.580 lat (usec) : 750=0.02%, 1000=32.11% 00:29:42.580 lat (msec) : 2=67.86%, 10=0.01% 00:29:42.580 cpu : usr=90.12%, sys=8.48%, ctx=24, majf=0, minf=1074 00:29:42.580 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.580 issued rwts: total=37612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.580 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:42.580 00:29:42.580 Run status group 0 (all jobs): 00:29:42.580 READ: bw=29.4MiB/s (30.8MB/s), 14.7MiB/s-14.7MiB/s (15.4MB/s-15.4MB/s), io=294MiB (308MB), run=10001-10001msec 00:29:42.580 ----------------------------------------------------- 00:29:42.580 Suppressions used: 00:29:42.580 count bytes template 00:29:42.580 2 16 /usr/src/fio/parse.c 00:29:42.580 1 8 libtcmalloc_minimal.so 00:29:42.580 1 904 libcrypto.so 00:29:42.580 ----------------------------------------------------- 00:29:42.580 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # 
for sub in "$@" 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.580 00:29:42.580 real 0m12.593s 00:29:42.580 user 0m20.084s 00:29:42.580 sys 0m2.132s 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.580 ************************************ 00:29:42.580 END TEST fio_dif_1_multi_subsystems 00:29:42.580 11:32:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:42.580 ************************************ 00:29:42.580 11:32:49 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:42.580 11:32:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:42.580 11:32:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.580 11:32:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:42.580 ************************************ 00:29:42.580 START TEST fio_dif_rand_params 00:29:42.581 ************************************ 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.581 
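fio_dif_1_multi_subsystems, which finished above, repeats the same flow with two backing devices: bdev_null0 behind nqn.2016-06.io.spdk:cnode0 and bdev_null1 behind nqn.2016-06.io.spdk:cnode1, both listening on 10.0.0.3:4420. The generated JSON therefore carries two bdev_nvme_attach_controller entries (Nvme0 and Nvme1, differing only in subnqn and hostnqn), which is why fio reports two job files, filename0 and filename1, and starts two threads. Only the second subsystem's RPCs differ from the single-subsystem sketch earlier (again using rpc.py for the harness's rpc_cmd):

  rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420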
11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.581 bdev_null0 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:42.581 [2024-12-10 11:32:49.365170] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.581 { 00:29:42.581 "params": { 00:29:42.581 "name": "Nvme$subsystem", 00:29:42.581 "trtype": "$TEST_TRANSPORT", 00:29:42.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.581 "adrfam": "ipv4", 00:29:42.581 "trsvcid": "$NVMF_PORT", 00:29:42.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.581 "hdgst": ${hdgst:-false}, 00:29:42.581 "ddgst": ${ddgst:-false} 00:29:42.581 }, 00:29:42.581 "method": "bdev_nvme_attach_controller" 00:29:42.581 } 00:29:42.581 EOF 00:29:42.581 )") 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:42.581 11:32:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:42.581 "params": { 00:29:42.581 "name": "Nvme0", 00:29:42.581 "trtype": "tcp", 00:29:42.581 "traddr": "10.0.0.3", 00:29:42.581 "adrfam": "ipv4", 00:29:42.581 "trsvcid": "4420", 00:29:42.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.581 "hdgst": false, 00:29:42.581 "ddgst": false 00:29:42.581 }, 00:29:42.581 "method": "bdev_nvme_attach_controller" 00:29:42.581 }' 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:42.581 11:32:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:42.839 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:42.839 ... 
00:29:42.839 fio-3.35 00:29:42.839 Starting 3 threads 00:29:49.395 00:29:49.395 filename0: (groupid=0, jobs=1): err= 0: pid=90564: Tue Dec 10 11:32:55 2024 00:29:49.395 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(125MiB/5002msec) 00:29:49.395 slat (usec): min=5, max=160, avg=22.44, stdev=10.47 00:29:49.395 clat (usec): min=14132, max=19167, avg=15011.00, stdev=428.23 00:29:49.395 lat (usec): min=14151, max=19190, avg=15033.44, stdev=429.45 00:29:49.395 clat percentiles (usec): 00:29:49.395 | 1.00th=[14222], 5.00th=[14615], 10.00th=[14615], 20.00th=[14746], 00:29:49.395 | 30.00th=[14746], 40.00th=[14877], 50.00th=[14877], 60.00th=[15008], 00:29:49.395 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15533], 95.00th=[15664], 00:29:49.395 | 99.00th=[16188], 99.50th=[16581], 99.90th=[19268], 99.95th=[19268], 00:29:49.395 | 99.99th=[19268] 00:29:49.395 bw ( KiB/s): min=24576, max=26112, per=33.41%, avg=25514.67, stdev=640.00, samples=9 00:29:49.395 iops : min= 192, max= 204, avg=199.33, stdev= 5.00, samples=9 00:29:49.395 lat (msec) : 20=100.00% 00:29:49.395 cpu : usr=92.48%, sys=6.84%, ctx=8, majf=0, minf=1072 00:29:49.395 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:49.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:49.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:49.395 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:49.395 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:49.395 filename0: (groupid=0, jobs=1): err= 0: pid=90565: Tue Dec 10 11:32:55 2024 00:29:49.395 read: IOPS=198, BW=24.9MiB/s (26.1MB/s)(125MiB/5006msec) 00:29:49.395 slat (usec): min=4, max=159, avg=22.64, stdev=10.32 00:29:49.395 clat (usec): min=14166, max=23439, avg=15022.86, stdev=587.79 00:29:49.395 lat (usec): min=14185, max=23466, avg=15045.50, stdev=588.43 00:29:49.395 clat percentiles (usec): 00:29:49.395 | 1.00th=[14222], 5.00th=[14615], 10.00th=[14615], 20.00th=[14746], 00:29:49.395 | 30.00th=[14746], 40.00th=[14877], 50.00th=[14877], 60.00th=[15008], 00:29:49.395 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15533], 95.00th=[15664], 00:29:49.395 | 99.00th=[16188], 99.50th=[16581], 99.90th=[23462], 99.95th=[23462], 00:29:49.395 | 99.99th=[23462] 00:29:49.395 bw ( KiB/s): min=24576, max=26112, per=33.29%, avg=25425.70, stdev=558.72, samples=10 00:29:49.395 iops : min= 192, max= 204, avg=198.60, stdev= 4.43, samples=10 00:29:49.395 lat (msec) : 20=99.70%, 50=0.30% 00:29:49.395 cpu : usr=92.03%, sys=7.25%, ctx=11, majf=0, minf=1075 00:29:49.395 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:49.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:49.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:49.395 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:49.395 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:49.395 filename0: (groupid=0, jobs=1): err= 0: pid=90566: Tue Dec 10 11:32:55 2024 00:29:49.396 read: IOPS=198, BW=24.9MiB/s (26.1MB/s)(125MiB/5008msec) 00:29:49.396 slat (usec): min=4, max=159, avg=22.32, stdev=10.86 00:29:49.396 clat (usec): min=14168, max=25091, avg=15027.56, stdev=666.00 00:29:49.396 lat (usec): min=14183, max=25120, avg=15049.88, stdev=666.68 00:29:49.396 clat percentiles (usec): 00:29:49.396 | 1.00th=[14222], 5.00th=[14615], 10.00th=[14615], 20.00th=[14746], 00:29:49.396 | 30.00th=[14746], 40.00th=[14877], 50.00th=[14877], 
60.00th=[15008], 00:29:49.396 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15533], 95.00th=[15664], 00:29:49.396 | 99.00th=[16319], 99.50th=[16581], 99.90th=[25035], 99.95th=[25035], 00:29:49.396 | 99.99th=[25035] 00:29:49.396 bw ( KiB/s): min=24576, max=26112, per=33.29%, avg=25420.80, stdev=566.68, samples=10 00:29:49.396 iops : min= 192, max= 204, avg=198.60, stdev= 4.43, samples=10 00:29:49.396 lat (msec) : 20=99.70%, 50=0.30% 00:29:49.396 cpu : usr=91.17%, sys=8.01%, ctx=100, majf=0, minf=1074 00:29:49.396 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:49.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:49.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:49.396 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:49.396 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:49.396 00:29:49.396 Run status group 0 (all jobs): 00:29:49.396 READ: bw=74.6MiB/s (78.2MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=374MiB (392MB), run=5002-5008msec 00:29:50.332 ----------------------------------------------------- 00:29:50.332 Suppressions used: 00:29:50.332 count bytes template 00:29:50.332 5 44 /usr/src/fio/parse.c 00:29:50.332 1 8 libtcmalloc_minimal.so 00:29:50.332 1 904 libcrypto.so 00:29:50.332 ----------------------------------------------------- 00:29:50.332 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:50.332 11:32:56 
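fio_dif_rand_params cycles through several parameter sets instead of a fixed workload. The pass that just finished used NULL_DIF=3 with 128 KiB random reads, three jobs at iodepth 3 for 5 seconds; the variables set above switch the next pass to NULL_DIF=2, 4 KiB blocks, eight jobs at iodepth 16 across three subsystems (files=2 adds subsystems 1 and 2 beside subsystem 0). The DIF type only changes how the null bdev is created, e.g. for the pass just completed (rpc.py again standing in for rpc_cmd):

  rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

The --dif-insert-or-strip option given to the TCP transport earlier lets the target insert and strip the protection information itself, so the host-side fio jobs read plain 512-byte blocks regardless of the DIF type under test.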
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.332 bdev_null0 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.332 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 [2024-12-10 11:32:56.878513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 bdev_null1 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:50.333 11:32:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 bdev_null2 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.333 11:32:56 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.333 { 00:29:50.333 "params": { 00:29:50.333 "name": "Nvme$subsystem", 00:29:50.333 "trtype": "$TEST_TRANSPORT", 00:29:50.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.333 "adrfam": "ipv4", 00:29:50.333 "trsvcid": "$NVMF_PORT", 00:29:50.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.333 "hdgst": ${hdgst:-false}, 00:29:50.333 "ddgst": ${ddgst:-false} 00:29:50.333 }, 00:29:50.333 "method": "bdev_nvme_attach_controller" 00:29:50.333 } 00:29:50.333 EOF 00:29:50.333 )") 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.333 { 00:29:50.333 "params": { 00:29:50.333 "name": "Nvme$subsystem", 00:29:50.333 "trtype": "$TEST_TRANSPORT", 00:29:50.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.333 "adrfam": "ipv4", 00:29:50.333 "trsvcid": "$NVMF_PORT", 00:29:50.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.333 "hdgst": ${hdgst:-false}, 00:29:50.333 "ddgst": ${ddgst:-false} 00:29:50.333 }, 00:29:50.333 "method": "bdev_nvme_attach_controller" 00:29:50.333 } 00:29:50.333 EOF 00:29:50.333 )") 00:29:50.333 11:32:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.333 { 00:29:50.333 "params": { 00:29:50.333 "name": "Nvme$subsystem", 00:29:50.333 "trtype": "$TEST_TRANSPORT", 00:29:50.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.333 "adrfam": "ipv4", 00:29:50.333 "trsvcid": "$NVMF_PORT", 00:29:50.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.333 "hdgst": ${hdgst:-false}, 00:29:50.333 "ddgst": ${ddgst:-false} 00:29:50.333 }, 00:29:50.333 "method": "bdev_nvme_attach_controller" 00:29:50.333 } 00:29:50.333 EOF 00:29:50.333 )") 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:29:50.333 11:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.333 "params": { 00:29:50.333 "name": "Nvme0", 00:29:50.333 "trtype": "tcp", 00:29:50.333 "traddr": "10.0.0.3", 00:29:50.333 "adrfam": "ipv4", 00:29:50.333 "trsvcid": "4420", 00:29:50.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.333 "hdgst": false, 00:29:50.333 "ddgst": false 00:29:50.333 }, 00:29:50.333 "method": "bdev_nvme_attach_controller" 00:29:50.333 },{ 00:29:50.333 "params": { 00:29:50.333 "name": "Nvme1", 00:29:50.333 "trtype": "tcp", 00:29:50.333 "traddr": "10.0.0.3", 00:29:50.333 "adrfam": "ipv4", 00:29:50.333 "trsvcid": "4420", 00:29:50.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.333 "hdgst": false, 00:29:50.333 "ddgst": false 00:29:50.333 }, 00:29:50.333 "method": "bdev_nvme_attach_controller" 00:29:50.333 },{ 00:29:50.333 "params": { 00:29:50.334 "name": "Nvme2", 00:29:50.334 "trtype": "tcp", 00:29:50.334 "traddr": "10.0.0.3", 00:29:50.334 "adrfam": "ipv4", 00:29:50.334 "trsvcid": "4420", 00:29:50.334 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:50.334 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:50.334 "hdgst": false, 00:29:50.334 "ddgst": false 00:29:50.334 }, 00:29:50.334 "method": "bdev_nvme_attach_controller" 00:29:50.334 }' 00:29:50.334 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:50.334 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:50.334 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:29:50.334 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:50.334 11:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.592 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:50.592 ... 00:29:50.592 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:50.592 ... 00:29:50.592 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:50.592 ... 00:29:50.592 fio-3.35 00:29:50.592 Starting 24 threads 00:30:02.814 00:30:02.814 filename0: (groupid=0, jobs=1): err= 0: pid=90665: Tue Dec 10 11:33:08 2024 00:30:02.814 read: IOPS=184, BW=737KiB/s (755kB/s)(7420KiB/10066msec) 00:30:02.814 slat (usec): min=5, max=8038, avg=35.21, stdev=372.07 00:30:02.814 clat (msec): min=21, max=179, avg=86.60, stdev=28.00 00:30:02.814 lat (msec): min=21, max=179, avg=86.63, stdev=27.99 00:30:02.814 clat percentiles (msec): 00:30:02.814 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 56], 20.00th=[ 62], 00:30:02.814 | 30.00th=[ 70], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 95], 00:30:02.814 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 127], 95.00th=[ 144], 00:30:02.814 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 180], 00:30:02.814 | 99.99th=[ 180] 00:30:02.814 bw ( KiB/s): min= 488, max= 1240, per=4.25%, avg=735.45, stdev=160.64, samples=20 00:30:02.814 iops : min= 122, max= 310, avg=183.85, stdev=40.16, samples=20 00:30:02.814 lat (msec) : 50=9.33%, 100=66.52%, 250=24.15% 00:30:02.814 cpu : usr=34.41%, sys=2.07%, ctx=1100, majf=0, minf=1075 00:30:02.814 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.6%, 16=16.1%, 32=0.0%, >=64=0.0% 00:30:02.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.814 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.814 issued rwts: total=1855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.814 filename0: (groupid=0, jobs=1): err= 0: pid=90666: Tue Dec 10 11:33:08 2024 00:30:02.814 read: IOPS=184, BW=739KiB/s (757kB/s)(7408KiB/10021msec) 00:30:02.814 slat (usec): min=5, max=8035, avg=27.89, stdev=222.61 00:30:02.814 clat (msec): min=24, max=165, avg=86.41, stdev=26.35 00:30:02.814 lat (msec): min=24, max=165, avg=86.44, stdev=26.35 00:30:02.814 clat percentiles (msec): 00:30:02.814 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 64], 00:30:02.814 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 88], 60.00th=[ 94], 00:30:02.814 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 122], 95.00th=[ 144], 00:30:02.814 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 165], 00:30:02.814 | 99.99th=[ 165] 00:30:02.814 bw ( KiB/s): min= 512, max= 1024, per=4.26%, avg=736.55, stdev=118.24, samples=20 00:30:02.814 iops : min= 128, max= 256, avg=184.10, stdev=29.62, samples=20 00:30:02.814 lat (msec) : 50=5.99%, 100=69.49%, 250=24.51% 00:30:02.814 cpu : usr=41.16%, sys=2.27%, ctx=1278, majf=0, minf=1075 00:30:02.814 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:30:02.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.814 complete : 0=0.0%, 4=86.9%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.814 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.814 filename0: (groupid=0, jobs=1): err= 0: pid=90667: Tue Dec 10 11:33:08 2024 00:30:02.814 read: IOPS=186, 
BW=748KiB/s (766kB/s)(7544KiB/10088msec) 00:30:02.814 slat (usec): min=5, max=8055, avg=29.81, stdev=262.62 00:30:02.814 clat (msec): min=2, max=194, avg=85.23, stdev=33.37 00:30:02.814 lat (msec): min=2, max=194, avg=85.26, stdev=33.37 00:30:02.814 clat percentiles (msec): 00:30:02.814 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 43], 20.00th=[ 61], 00:30:02.814 | 30.00th=[ 68], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 96], 00:30:02.814 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 132], 95.00th=[ 144], 00:30:02.814 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 178], 99.95th=[ 194], 00:30:02.814 | 99.99th=[ 194] 00:30:02.814 bw ( KiB/s): min= 432, max= 1920, per=4.32%, avg=747.90, stdev=298.11, samples=20 00:30:02.814 iops : min= 108, max= 480, avg=186.95, stdev=74.53, samples=20 00:30:02.814 lat (msec) : 4=0.85%, 10=1.96%, 20=2.17%, 50=7.85%, 100=59.76% 00:30:02.814 lat (msec) : 250=27.41% 00:30:02.814 cpu : usr=38.98%, sys=2.50%, ctx=1316, majf=0, minf=1075 00:30:02.814 IO depths : 1=0.3%, 2=1.2%, 4=4.1%, 8=78.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:02.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.814 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.814 issued rwts: total=1886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.814 filename0: (groupid=0, jobs=1): err= 0: pid=90668: Tue Dec 10 11:33:08 2024 00:30:02.814 read: IOPS=184, BW=737KiB/s (755kB/s)(7424KiB/10073msec) 00:30:02.814 slat (usec): min=8, max=10034, avg=37.44, stdev=353.67 00:30:02.814 clat (msec): min=12, max=177, avg=86.47, stdev=27.92 00:30:02.814 lat (msec): min=12, max=177, avg=86.51, stdev=27.92 00:30:02.814 clat percentiles (msec): 00:30:02.814 | 1.00th=[ 29], 5.00th=[ 46], 10.00th=[ 55], 20.00th=[ 63], 00:30:02.814 | 30.00th=[ 68], 40.00th=[ 82], 50.00th=[ 89], 60.00th=[ 94], 00:30:02.814 | 70.00th=[ 97], 80.00th=[ 105], 90.00th=[ 126], 95.00th=[ 144], 00:30:02.814 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 178], 00:30:02.814 | 99.99th=[ 178] 00:30:02.814 bw ( KiB/s): min= 512, max= 1203, per=4.26%, avg=737.95, stdev=162.28, samples=20 00:30:02.814 iops : min= 128, max= 300, avg=184.35, stdev=40.47, samples=20 00:30:02.814 lat (msec) : 20=0.11%, 50=8.73%, 100=66.65%, 250=24.52% 00:30:02.814 cpu : usr=40.16%, sys=2.51%, ctx=1335, majf=0, minf=1075 00:30:02.814 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:30:02.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.814 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.814 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.814 filename0: (groupid=0, jobs=1): err= 0: pid=90669: Tue Dec 10 11:33:08 2024 00:30:02.814 read: IOPS=181, BW=727KiB/s (745kB/s)(7320KiB/10063msec) 00:30:02.814 slat (usec): min=8, max=8038, avg=26.27, stdev=221.35 00:30:02.814 clat (msec): min=19, max=176, avg=87.69, stdev=28.34 00:30:02.814 lat (msec): min=19, max=176, avg=87.71, stdev=28.33 00:30:02.814 clat percentiles (msec): 00:30:02.814 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 52], 20.00th=[ 62], 00:30:02.814 | 30.00th=[ 70], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 96], 00:30:02.814 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 129], 95.00th=[ 144], 00:30:02.814 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 178], 00:30:02.814 | 99.99th=[ 178] 00:30:02.814 
bw ( KiB/s): min= 488, max= 1312, per=4.21%, avg=727.85, stdev=176.41, samples=20 00:30:02.814 iops : min= 122, max= 328, avg=181.95, stdev=44.11, samples=20 00:30:02.814 lat (msec) : 20=0.11%, 50=9.56%, 100=64.54%, 250=25.79% 00:30:02.814 cpu : usr=37.73%, sys=2.20%, ctx=1256, majf=0, minf=1072 00:30:02.814 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:30:02.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 issued rwts: total=1830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.815 filename0: (groupid=0, jobs=1): err= 0: pid=90670: Tue Dec 10 11:33:08 2024 00:30:02.815 read: IOPS=186, BW=747KiB/s (765kB/s)(7508KiB/10050msec) 00:30:02.815 slat (usec): min=6, max=8051, avg=27.99, stdev=261.99 00:30:02.815 clat (msec): min=21, max=167, avg=85.51, stdev=27.44 00:30:02.815 lat (msec): min=21, max=167, avg=85.54, stdev=27.44 00:30:02.815 clat percentiles (msec): 00:30:02.815 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 61], 00:30:02.815 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 86], 60.00th=[ 95], 00:30:02.815 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 121], 95.00th=[ 144], 00:30:02.815 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:30:02.815 | 99.99th=[ 169] 00:30:02.815 bw ( KiB/s): min= 512, max= 1112, per=4.30%, avg=743.50, stdev=145.80, samples=20 00:30:02.815 iops : min= 128, max= 278, avg=185.85, stdev=36.45, samples=20 00:30:02.815 lat (msec) : 50=8.20%, 100=71.02%, 250=20.78% 00:30:02.815 cpu : usr=32.13%, sys=1.85%, ctx=877, majf=0, minf=1074 00:30:02.815 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:30:02.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.815 filename0: (groupid=0, jobs=1): err= 0: pid=90671: Tue Dec 10 11:33:08 2024 00:30:02.815 read: IOPS=189, BW=757KiB/s (775kB/s)(7572KiB/10007msec) 00:30:02.815 slat (usec): min=5, max=4048, avg=20.95, stdev=92.83 00:30:02.815 clat (msec): min=2, max=160, avg=84.47, stdev=28.16 00:30:02.815 lat (msec): min=2, max=160, avg=84.49, stdev=28.16 00:30:02.815 clat percentiles (msec): 00:30:02.815 | 1.00th=[ 14], 5.00th=[ 41], 10.00th=[ 55], 20.00th=[ 62], 00:30:02.815 | 30.00th=[ 67], 40.00th=[ 73], 50.00th=[ 87], 60.00th=[ 94], 00:30:02.815 | 70.00th=[ 97], 80.00th=[ 103], 90.00th=[ 121], 95.00th=[ 144], 00:30:02.815 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:30:02.815 | 99.99th=[ 161] 00:30:02.815 bw ( KiB/s): min= 512, max= 992, per=4.30%, avg=744.21, stdev=120.00, samples=19 00:30:02.815 iops : min= 128, max= 248, avg=186.00, stdev=29.95, samples=19 00:30:02.815 lat (msec) : 4=0.21%, 10=0.48%, 20=0.48%, 50=6.60%, 100=69.26% 00:30:02.815 lat (msec) : 250=22.98% 00:30:02.815 cpu : usr=40.86%, sys=2.16%, ctx=1338, majf=0, minf=1074 00:30:02.815 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:30:02.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:30:02.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.815 filename0: (groupid=0, jobs=1): err= 0: pid=90672: Tue Dec 10 11:33:08 2024 00:30:02.815 read: IOPS=183, BW=732KiB/s (750kB/s)(7340KiB/10021msec) 00:30:02.815 slat (usec): min=5, max=8046, avg=30.20, stdev=324.28 00:30:02.815 clat (msec): min=23, max=158, avg=87.17, stdev=26.46 00:30:02.815 lat (msec): min=23, max=159, avg=87.20, stdev=26.46 00:30:02.815 clat percentiles (msec): 00:30:02.815 | 1.00th=[ 27], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 61], 00:30:02.815 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 96], 00:30:02.815 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 144], 00:30:02.815 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 00:30:02.815 | 99.99th=[ 159] 00:30:02.815 bw ( KiB/s): min= 512, max= 1048, per=4.22%, avg=730.15, stdev=116.22, samples=20 00:30:02.815 iops : min= 128, max= 262, avg=182.50, stdev=29.11, samples=20 00:30:02.815 lat (msec) : 50=6.70%, 100=69.92%, 250=23.38% 00:30:02.815 cpu : usr=32.56%, sys=1.96%, ctx=946, majf=0, minf=1071 00:30:02.815 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:30:02.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 complete : 0=0.0%, 4=87.2%, 8=12.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.815 filename1: (groupid=0, jobs=1): err= 0: pid=90673: Tue Dec 10 11:33:08 2024 00:30:02.815 read: IOPS=175, BW=701KiB/s (718kB/s)(7016KiB/10009msec) 00:30:02.815 slat (usec): min=5, max=8034, avg=27.17, stdev=270.59 00:30:02.815 clat (msec): min=13, max=161, avg=91.13, stdev=26.70 00:30:02.815 lat (msec): min=13, max=161, avg=91.16, stdev=26.71 00:30:02.815 clat percentiles (msec): 00:30:02.815 | 1.00th=[ 31], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 65], 00:30:02.815 | 30.00th=[ 73], 40.00th=[ 87], 50.00th=[ 95], 60.00th=[ 96], 00:30:02.815 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 131], 95.00th=[ 144], 00:30:02.815 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:30:02.815 | 99.99th=[ 161] 00:30:02.815 bw ( KiB/s): min= 512, max= 936, per=4.00%, avg=691.05, stdev=97.49, samples=19 00:30:02.815 iops : min= 128, max= 234, avg=172.74, stdev=24.37, samples=19 00:30:02.815 lat (msec) : 20=0.51%, 50=4.73%, 100=65.68%, 250=29.08% 00:30:02.815 cpu : usr=35.88%, sys=1.95%, ctx=1004, majf=0, minf=1072 00:30:02.815 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:30:02.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 issued rwts: total=1754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.815 filename1: (groupid=0, jobs=1): err= 0: pid=90674: Tue Dec 10 11:33:08 2024 00:30:02.815 read: IOPS=180, BW=724KiB/s (741kB/s)(7264KiB/10036msec) 00:30:02.815 slat (usec): min=10, max=9045, avg=32.04, stdev=339.93 00:30:02.815 clat (msec): min=25, max=172, avg=88.23, stdev=27.65 00:30:02.815 lat (msec): min=25, max=172, avg=88.26, stdev=27.66 00:30:02.815 clat percentiles (msec): 00:30:02.815 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 57], 20.00th=[ 62], 00:30:02.815 | 30.00th=[ 71], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 96], 00:30:02.815 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 128], 
95.00th=[ 144], 00:30:02.815 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 174], 00:30:02.815 | 99.99th=[ 174] 00:30:02.815 bw ( KiB/s): min= 488, max= 1024, per=4.16%, avg=719.95, stdev=129.50, samples=20 00:30:02.815 iops : min= 122, max= 256, avg=179.90, stdev=32.45, samples=20 00:30:02.815 lat (msec) : 50=7.38%, 100=66.57%, 250=26.05% 00:30:02.815 cpu : usr=32.48%, sys=1.92%, ctx=996, majf=0, minf=1073 00:30:02.815 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:02.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.815 filename1: (groupid=0, jobs=1): err= 0: pid=90675: Tue Dec 10 11:33:08 2024 00:30:02.815 read: IOPS=196, BW=786KiB/s (805kB/s)(7928KiB/10088msec) 00:30:02.815 slat (usec): min=7, max=13279, avg=38.16, stdev=407.58 00:30:02.815 clat (msec): min=2, max=161, avg=81.04, stdev=35.01 00:30:02.815 lat (msec): min=2, max=161, avg=81.07, stdev=35.01 00:30:02.815 clat percentiles (msec): 00:30:02.815 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 30], 20.00th=[ 61], 00:30:02.815 | 30.00th=[ 67], 40.00th=[ 80], 50.00th=[ 88], 60.00th=[ 94], 00:30:02.815 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 125], 95.00th=[ 144], 00:30:02.815 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 163], 99.95th=[ 163], 00:30:02.815 | 99.99th=[ 163] 00:30:02.815 bw ( KiB/s): min= 488, max= 2448, per=4.55%, avg=786.30, stdev=402.70, samples=20 00:30:02.815 iops : min= 122, max= 612, avg=196.55, stdev=100.68, samples=20 00:30:02.815 lat (msec) : 4=2.07%, 10=5.20%, 20=1.72%, 50=7.52%, 100=60.29% 00:30:02.815 lat (msec) : 250=23.21% 00:30:02.815 cpu : usr=43.36%, sys=2.83%, ctx=1233, majf=0, minf=1073 00:30:02.815 IO depths : 1=0.5%, 2=1.6%, 4=4.4%, 8=78.3%, 16=15.3%, 32=0.0%, >=64=0.0% 00:30:02.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 issued rwts: total=1982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.815 filename1: (groupid=0, jobs=1): err= 0: pid=90676: Tue Dec 10 11:33:08 2024 00:30:02.815 read: IOPS=180, BW=723KiB/s (740kB/s)(7296KiB/10097msec) 00:30:02.815 slat (usec): min=5, max=8037, avg=33.23, stdev=338.24 00:30:02.815 clat (msec): min=14, max=179, avg=88.24, stdev=29.47 00:30:02.815 lat (msec): min=14, max=179, avg=88.27, stdev=29.48 00:30:02.815 clat percentiles (msec): 00:30:02.815 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 50], 20.00th=[ 61], 00:30:02.815 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 96], 00:30:02.815 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 132], 95.00th=[ 144], 00:30:02.815 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 180], 00:30:02.815 | 99.99th=[ 180] 00:30:02.815 bw ( KiB/s): min= 488, max= 1392, per=4.18%, avg=723.05, stdev=191.18, samples=20 00:30:02.815 iops : min= 122, max= 348, avg=180.75, stdev=47.80, samples=20 00:30:02.815 lat (msec) : 20=0.77%, 50=9.48%, 100=64.31%, 250=25.44% 00:30:02.815 cpu : usr=33.12%, sys=1.86%, ctx=962, majf=0, minf=1074 00:30:02.815 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:30:02.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 complete : 
0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.815 issued rwts: total=1824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.815 filename1: (groupid=0, jobs=1): err= 0: pid=90677: Tue Dec 10 11:33:08 2024 00:30:02.815 read: IOPS=156, BW=628KiB/s (643kB/s)(6292KiB/10021msec) 00:30:02.815 slat (usec): min=5, max=8034, avg=29.62, stdev=303.21 00:30:02.815 clat (msec): min=24, max=205, avg=101.64, stdev=27.95 00:30:02.815 lat (msec): min=24, max=205, avg=101.67, stdev=27.95 00:30:02.815 clat percentiles (msec): 00:30:02.815 | 1.00th=[ 36], 5.00th=[ 50], 10.00th=[ 70], 20.00th=[ 87], 00:30:02.815 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 96], 60.00th=[ 100], 00:30:02.815 | 70.00th=[ 107], 80.00th=[ 123], 90.00th=[ 142], 95.00th=[ 153], 00:30:02.815 | 99.00th=[ 192], 99.50th=[ 192], 99.90th=[ 205], 99.95th=[ 205], 00:30:02.815 | 99.99th=[ 205] 00:30:02.815 bw ( KiB/s): min= 384, max= 1016, per=3.62%, avg=625.00, stdev=152.16, samples=20 00:30:02.815 iops : min= 96, max= 254, avg=156.20, stdev=38.04, samples=20 00:30:02.815 lat (msec) : 50=5.15%, 100=55.88%, 250=38.97% 00:30:02.815 cpu : usr=37.62%, sys=2.36%, ctx=1155, majf=0, minf=1074 00:30:02.816 IO depths : 1=0.1%, 2=5.1%, 4=20.5%, 8=61.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:02.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 complete : 0=0.0%, 4=93.0%, 8=2.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 issued rwts: total=1573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.816 filename1: (groupid=0, jobs=1): err= 0: pid=90678: Tue Dec 10 11:33:08 2024 00:30:02.816 read: IOPS=177, BW=710KiB/s (727kB/s)(7116KiB/10020msec) 00:30:02.816 slat (usec): min=5, max=12038, avg=28.52, stdev=301.57 00:30:02.816 clat (msec): min=24, max=160, avg=89.91, stdev=27.19 00:30:02.816 lat (msec): min=24, max=160, avg=89.94, stdev=27.19 00:30:02.816 clat percentiles (msec): 00:30:02.816 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 65], 00:30:02.816 | 30.00th=[ 71], 40.00th=[ 86], 50.00th=[ 93], 60.00th=[ 96], 00:30:02.816 | 70.00th=[ 102], 80.00th=[ 107], 90.00th=[ 129], 95.00th=[ 144], 00:30:02.816 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 161], 99.95th=[ 161], 00:30:02.816 | 99.99th=[ 161] 00:30:02.816 bw ( KiB/s): min= 512, max= 1072, per=4.09%, avg=707.45, stdev=143.67, samples=20 00:30:02.816 iops : min= 128, max= 268, avg=176.85, stdev=35.93, samples=20 00:30:02.816 lat (msec) : 50=6.41%, 100=61.78%, 250=31.82% 00:30:02.816 cpu : usr=39.75%, sys=2.32%, ctx=1262, majf=0, minf=1074 00:30:02.816 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.6%, 16=14.8%, 32=0.0%, >=64=0.0% 00:30:02.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 complete : 0=0.0%, 4=88.7%, 8=9.8%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 issued rwts: total=1779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.816 filename1: (groupid=0, jobs=1): err= 0: pid=90679: Tue Dec 10 11:33:08 2024 00:30:02.816 read: IOPS=181, BW=726KiB/s (744kB/s)(7308KiB/10060msec) 00:30:02.816 slat (usec): min=7, max=8034, avg=22.72, stdev=187.67 00:30:02.816 clat (msec): min=28, max=180, avg=87.94, stdev=28.04 00:30:02.816 lat (msec): min=28, max=180, avg=87.97, stdev=28.05 00:30:02.816 clat percentiles (msec): 00:30:02.816 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 57], 20.00th=[ 62], 
00:30:02.816 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 96], 00:30:02.816 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 132], 95.00th=[ 144], 00:30:02.816 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 182], 00:30:02.816 | 99.99th=[ 182] 00:30:02.816 bw ( KiB/s): min= 507, max= 1136, per=4.18%, avg=722.70, stdev=137.76, samples=20 00:30:02.816 iops : min= 126, max= 284, avg=180.60, stdev=34.51, samples=20 00:30:02.816 lat (msec) : 50=8.76%, 100=67.49%, 250=23.75% 00:30:02.816 cpu : usr=32.54%, sys=2.08%, ctx=1026, majf=0, minf=1072 00:30:02.816 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:30:02.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 issued rwts: total=1827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.816 filename1: (groupid=0, jobs=1): err= 0: pid=90680: Tue Dec 10 11:33:08 2024 00:30:02.816 read: IOPS=186, BW=746KiB/s (764kB/s)(7480KiB/10030msec) 00:30:02.816 slat (usec): min=5, max=8041, avg=31.69, stdev=293.34 00:30:02.816 clat (msec): min=22, max=164, avg=85.66, stdev=27.47 00:30:02.816 lat (msec): min=22, max=164, avg=85.69, stdev=27.48 00:30:02.816 clat percentiles (msec): 00:30:02.816 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 62], 00:30:02.816 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 87], 60.00th=[ 93], 00:30:02.816 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 122], 95.00th=[ 144], 00:30:02.816 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 165], 99.95th=[ 165], 00:30:02.816 | 99.99th=[ 165] 00:30:02.816 bw ( KiB/s): min= 488, max= 1120, per=4.29%, avg=741.90, stdev=141.05, samples=20 00:30:02.816 iops : min= 122, max= 280, avg=185.35, stdev=35.31, samples=20 00:30:02.816 lat (msec) : 50=7.65%, 100=68.29%, 250=24.06% 00:30:02.816 cpu : usr=36.20%, sys=2.30%, ctx=1028, majf=0, minf=1071 00:30:02.816 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:30:02.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 issued rwts: total=1870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.816 filename2: (groupid=0, jobs=1): err= 0: pid=90681: Tue Dec 10 11:33:08 2024 00:30:02.816 read: IOPS=161, BW=647KiB/s (663kB/s)(6484KiB/10017msec) 00:30:02.816 slat (usec): min=5, max=4034, avg=20.34, stdev=99.99 00:30:02.816 clat (msec): min=19, max=174, avg=98.70, stdev=30.41 00:30:02.816 lat (msec): min=19, max=174, avg=98.73, stdev=30.41 00:30:02.816 clat percentiles (msec): 00:30:02.816 | 1.00th=[ 28], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 71], 00:30:02.816 | 30.00th=[ 86], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 103], 00:30:02.816 | 70.00th=[ 112], 80.00th=[ 126], 90.00th=[ 144], 95.00th=[ 150], 00:30:02.816 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 176], 00:30:02.816 | 99.99th=[ 176] 00:30:02.816 bw ( KiB/s): min= 384, max= 1072, per=3.73%, avg=644.45, stdev=157.52, samples=20 00:30:02.816 iops : min= 96, max= 268, avg=161.10, stdev=39.39, samples=20 00:30:02.816 lat (msec) : 20=0.19%, 50=6.23%, 100=51.82%, 250=41.76% 00:30:02.816 cpu : usr=39.29%, sys=2.22%, ctx=1225, majf=0, minf=1074 00:30:02.816 IO depths : 1=0.1%, 2=3.8%, 4=15.0%, 8=67.4%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:02.816 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 complete : 0=0.0%, 4=91.2%, 8=5.5%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 issued rwts: total=1621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.816 filename2: (groupid=0, jobs=1): err= 0: pid=90682: Tue Dec 10 11:33:08 2024 00:30:02.816 read: IOPS=174, BW=697KiB/s (714kB/s)(6992KiB/10033msec) 00:30:02.816 slat (usec): min=5, max=12035, avg=31.53, stdev=365.70 00:30:02.816 clat (msec): min=32, max=166, avg=91.63, stdev=27.65 00:30:02.816 lat (msec): min=32, max=166, avg=91.66, stdev=27.67 00:30:02.816 clat percentiles (msec): 00:30:02.816 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 64], 00:30:02.816 | 30.00th=[ 72], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 96], 00:30:02.816 | 70.00th=[ 100], 80.00th=[ 114], 90.00th=[ 132], 95.00th=[ 144], 00:30:02.816 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 167], 99.95th=[ 167], 00:30:02.816 | 99.99th=[ 167] 00:30:02.816 bw ( KiB/s): min= 488, max= 992, per=4.01%, avg=693.10, stdev=129.95, samples=20 00:30:02.816 iops : min= 122, max= 248, avg=173.15, stdev=32.54, samples=20 00:30:02.816 lat (msec) : 50=6.12%, 100=64.82%, 250=29.06% 00:30:02.816 cpu : usr=32.37%, sys=1.81%, ctx=916, majf=0, minf=1074 00:30:02.816 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=77.0%, 16=15.0%, 32=0.0%, >=64=0.0% 00:30:02.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 complete : 0=0.0%, 4=88.7%, 8=9.9%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 issued rwts: total=1748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.816 filename2: (groupid=0, jobs=1): err= 0: pid=90683: Tue Dec 10 11:33:08 2024 00:30:02.816 read: IOPS=192, BW=768KiB/s (786kB/s)(7740KiB/10078msec) 00:30:02.816 slat (usec): min=5, max=4459, avg=22.67, stdev=117.91 00:30:02.816 clat (msec): min=3, max=180, avg=83.10, stdev=35.79 00:30:02.816 lat (msec): min=3, max=180, avg=83.12, stdev=35.78 00:30:02.816 clat percentiles (msec): 00:30:02.816 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 31], 20.00th=[ 61], 00:30:02.816 | 30.00th=[ 67], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 95], 00:30:02.816 | 70.00th=[ 100], 80.00th=[ 106], 90.00th=[ 128], 95.00th=[ 146], 00:30:02.816 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 178], 99.95th=[ 180], 00:30:02.816 | 99.99th=[ 180] 00:30:02.816 bw ( KiB/s): min= 488, max= 2304, per=4.44%, avg=767.50, stdev=378.80, samples=20 00:30:02.816 iops : min= 122, max= 576, avg=191.85, stdev=94.70, samples=20 00:30:02.816 lat (msec) : 4=0.21%, 10=5.53%, 20=2.53%, 50=7.39%, 100=55.76% 00:30:02.816 lat (msec) : 250=28.58% 00:30:02.816 cpu : usr=41.03%, sys=2.47%, ctx=1329, majf=0, minf=1073 00:30:02.816 IO depths : 1=0.5%, 2=1.6%, 4=4.7%, 8=77.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:30:02.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 issued rwts: total=1935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.816 filename2: (groupid=0, jobs=1): err= 0: pid=90684: Tue Dec 10 11:33:08 2024 00:30:02.816 read: IOPS=191, BW=767KiB/s (786kB/s)(7684KiB/10013msec) 00:30:02.816 slat (usec): min=5, max=8036, avg=37.77, stdev=370.79 00:30:02.816 clat (usec): min=1825, max=164596, avg=83211.10, stdev=32138.40 00:30:02.816 lat (usec): min=1835, 
max=164612, avg=83248.88, stdev=32146.57 00:30:02.816 clat percentiles (usec): 00:30:02.816 | 1.00th=[ 1926], 5.00th=[ 14091], 10.00th=[ 47449], 20.00th=[ 60031], 00:30:02.816 | 30.00th=[ 66847], 40.00th=[ 71828], 50.00th=[ 87557], 60.00th=[ 94897], 00:30:02.816 | 70.00th=[ 96994], 80.00th=[105382], 90.00th=[120062], 95.00th=[141558], 00:30:02.816 | 99.00th=[152044], 99.50th=[156238], 99.90th=[164627], 99.95th=[164627], 00:30:02.816 | 99.99th=[164627] 00:30:02.816 bw ( KiB/s): min= 512, max= 1024, per=4.18%, avg=722.84, stdev=114.61, samples=19 00:30:02.816 iops : min= 128, max= 256, avg=180.68, stdev=28.66, samples=19 00:30:02.816 lat (msec) : 2=2.60%, 4=1.04%, 10=0.83%, 20=1.51%, 50=4.58% 00:30:02.816 lat (msec) : 100=64.34%, 250=25.09% 00:30:02.816 cpu : usr=37.14%, sys=2.31%, ctx=1097, majf=0, minf=1074 00:30:02.816 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:02.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.816 issued rwts: total=1921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.816 filename2: (groupid=0, jobs=1): err= 0: pid=90685: Tue Dec 10 11:33:08 2024 00:30:02.816 read: IOPS=185, BW=743KiB/s (761kB/s)(7480KiB/10064msec) 00:30:02.816 slat (usec): min=5, max=8040, avg=33.67, stdev=307.59 00:30:02.816 clat (msec): min=23, max=178, avg=85.96, stdev=27.89 00:30:02.816 lat (msec): min=23, max=178, avg=85.99, stdev=27.90 00:30:02.816 clat percentiles (msec): 00:30:02.816 | 1.00th=[ 24], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 61], 00:30:02.816 | 30.00th=[ 70], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 95], 00:30:02.816 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 123], 95.00th=[ 144], 00:30:02.816 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 163], 99.95th=[ 180], 00:30:02.816 | 99.99th=[ 180] 00:30:02.816 bw ( KiB/s): min= 512, max= 1192, per=4.29%, avg=741.60, stdev=145.70, samples=20 00:30:02.817 iops : min= 128, max= 298, avg=185.40, stdev=36.42, samples=20 00:30:02.817 lat (msec) : 50=9.41%, 100=66.52%, 250=24.06% 00:30:02.817 cpu : usr=38.00%, sys=2.17%, ctx=960, majf=0, minf=1075 00:30:02.817 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:30:02.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.817 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.817 issued rwts: total=1870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.817 filename2: (groupid=0, jobs=1): err= 0: pid=90686: Tue Dec 10 11:33:08 2024 00:30:02.817 read: IOPS=169, BW=676KiB/s (693kB/s)(6816KiB/10078msec) 00:30:02.817 slat (usec): min=9, max=8039, avg=39.62, stdev=413.19 00:30:02.817 clat (msec): min=21, max=180, avg=94.13, stdev=27.80 00:30:02.817 lat (msec): min=21, max=180, avg=94.17, stdev=27.80 00:30:02.817 clat percentiles (msec): 00:30:02.817 | 1.00th=[ 27], 5.00th=[ 44], 10.00th=[ 61], 20.00th=[ 71], 00:30:02.817 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 96], 00:30:02.817 | 70.00th=[ 105], 80.00th=[ 114], 90.00th=[ 132], 95.00th=[ 144], 00:30:02.817 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 167], 99.95th=[ 182], 00:30:02.817 | 99.99th=[ 182] 00:30:02.817 bw ( KiB/s): min= 512, max= 1149, per=3.92%, avg=677.35, stdev=148.48, samples=20 00:30:02.817 iops : min= 128, max= 287, avg=169.30, stdev=37.08, 
samples=20 00:30:02.817 lat (msec) : 50=8.69%, 100=57.10%, 250=34.21% 00:30:02.817 cpu : usr=32.50%, sys=2.03%, ctx=1009, majf=0, minf=1071 00:30:02.817 IO depths : 1=0.1%, 2=2.4%, 4=9.7%, 8=72.7%, 16=15.1%, 32=0.0%, >=64=0.0% 00:30:02.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.817 complete : 0=0.0%, 4=90.1%, 8=7.7%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.817 issued rwts: total=1704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.817 filename2: (groupid=0, jobs=1): err= 0: pid=90687: Tue Dec 10 11:33:08 2024 00:30:02.817 read: IOPS=175, BW=701KiB/s (718kB/s)(7020KiB/10017msec) 00:30:02.817 slat (usec): min=5, max=8033, avg=33.25, stdev=288.83 00:30:02.817 clat (msec): min=19, max=176, avg=91.07, stdev=28.42 00:30:02.817 lat (msec): min=19, max=176, avg=91.11, stdev=28.41 00:30:02.817 clat percentiles (msec): 00:30:02.817 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 58], 20.00th=[ 65], 00:30:02.817 | 30.00th=[ 75], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 96], 00:30:02.817 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 136], 95.00th=[ 144], 00:30:02.817 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 178], 00:30:02.817 | 99.99th=[ 178] 00:30:02.817 bw ( KiB/s): min= 512, max= 1080, per=4.04%, avg=698.45, stdev=152.98, samples=20 00:30:02.817 iops : min= 128, max= 270, avg=174.60, stdev=38.26, samples=20 00:30:02.817 lat (msec) : 20=0.17%, 50=6.55%, 100=64.05%, 250=29.23% 00:30:02.817 cpu : usr=38.76%, sys=2.68%, ctx=1157, majf=0, minf=1072 00:30:02.817 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=74.1%, 16=14.5%, 32=0.0%, >=64=0.0% 00:30:02.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.817 complete : 0=0.0%, 4=89.3%, 8=8.7%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.817 issued rwts: total=1755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.817 filename2: (groupid=0, jobs=1): err= 0: pid=90688: Tue Dec 10 11:33:08 2024 00:30:02.817 read: IOPS=176, BW=707KiB/s (724kB/s)(7140KiB/10102msec) 00:30:02.817 slat (usec): min=4, max=8062, avg=36.48, stdev=352.68 00:30:02.817 clat (msec): min=9, max=190, avg=90.15, stdev=30.72 00:30:02.817 lat (msec): min=12, max=190, avg=90.19, stdev=30.71 00:30:02.817 clat percentiles (msec): 00:30:02.817 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 51], 20.00th=[ 62], 00:30:02.817 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 95], 60.00th=[ 96], 00:30:02.817 | 70.00th=[ 99], 80.00th=[ 117], 90.00th=[ 132], 95.00th=[ 144], 00:30:02.817 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 190], 00:30:02.817 | 99.99th=[ 190] 00:30:02.817 bw ( KiB/s): min= 456, max= 1296, per=4.09%, avg=707.45, stdev=180.61, samples=20 00:30:02.817 iops : min= 114, max= 324, avg=176.85, stdev=45.16, samples=20 00:30:02.817 lat (msec) : 10=0.06%, 20=0.84%, 50=9.08%, 100=61.29%, 250=28.74% 00:30:02.817 cpu : usr=32.13%, sys=2.14%, ctx=893, majf=0, minf=1074 00:30:02.817 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:30:02.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.817 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.817 issued rwts: total=1785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:02.817 00:30:02.817 Run status group 0 (all jobs): 00:30:02.817 READ: bw=16.9MiB/s (17.7MB/s), 
628KiB/s-786KiB/s (643kB/s-805kB/s), io=171MiB (179MB), run=10007-10102msec 00:30:03.385 ----------------------------------------------------- 00:30:03.385 Suppressions used: 00:30:03.385 count bytes template 00:30:03.385 45 402 /usr/src/fio/parse.c 00:30:03.385 1 8 libtcmalloc_minimal.so 00:30:03.385 1 904 libcrypto.so 00:30:03.385 ----------------------------------------------------- 00:30:03.385 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:03.385 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
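For reference, the destroy_subsystems/create_subsystems steps traced here reduce to a short RPC sequence against the running NVMe-oF target. A minimal sketch for one subsystem index follows, assuming rpc_cmd in this log simply forwards to scripts/rpc.py; every command name and argument is copied from the echoed trace, only the wrapper is spelled out, and the --dif-type value changes per test case (2 for the run above, 1 for the run being set up below):

# tear down the subsystem and its null bdev from the previous case
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0

# re-create them for the next case: a null bdev with 512-byte blocks,
# 16 bytes of metadata and end-to-end data protection (DIF) type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The same sequence repeats for each subsystem index a given case uses (cnode1, cnode2, ...), which is what the for-loop over "$@" in dif.sh is iterating through in the trace.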
00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 bdev_null0 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 [2024-12-10 11:33:10.027471] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 1 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 bdev_null1 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.386 { 00:30:03.386 "params": { 00:30:03.386 "name": "Nvme$subsystem", 00:30:03.386 "trtype": "$TEST_TRANSPORT", 00:30:03.386 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:30:03.386 "adrfam": "ipv4", 00:30:03.386 "trsvcid": "$NVMF_PORT", 00:30:03.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.386 "hdgst": ${hdgst:-false}, 00:30:03.386 "ddgst": ${ddgst:-false} 00:30:03.386 }, 00:30:03.386 "method": "bdev_nvme_attach_controller" 00:30:03.386 } 00:30:03.386 EOF 00:30:03.386 )") 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.386 { 00:30:03.386 "params": { 00:30:03.386 "name": "Nvme$subsystem", 00:30:03.386 "trtype": "$TEST_TRANSPORT", 00:30:03.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.386 "adrfam": "ipv4", 00:30:03.386 "trsvcid": "$NVMF_PORT", 00:30:03.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.386 "hdgst": ${hdgst:-false}, 00:30:03.386 "ddgst": ${ddgst:-false} 00:30:03.386 }, 00:30:03.386 "method": "bdev_nvme_attach_controller" 00:30:03.386 } 00:30:03.386 EOF 00:30:03.386 )") 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.386 "params": { 00:30:03.386 "name": "Nvme0", 00:30:03.386 "trtype": "tcp", 00:30:03.386 "traddr": "10.0.0.3", 00:30:03.386 "adrfam": "ipv4", 00:30:03.386 "trsvcid": "4420", 00:30:03.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:03.386 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:03.386 "hdgst": false, 00:30:03.386 "ddgst": false 00:30:03.386 }, 00:30:03.386 "method": "bdev_nvme_attach_controller" 00:30:03.386 },{ 00:30:03.386 "params": { 00:30:03.386 "name": "Nvme1", 00:30:03.386 "trtype": "tcp", 00:30:03.386 "traddr": "10.0.0.3", 00:30:03.386 "adrfam": "ipv4", 00:30:03.386 "trsvcid": "4420", 00:30:03.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.386 "hdgst": false, 00:30:03.386 "ddgst": false 00:30:03.386 }, 00:30:03.386 "method": "bdev_nvme_attach_controller" 00:30:03.386 }' 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:03.386 11:33:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:03.645 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:03.645 ... 00:30:03.645 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:03.645 ... 
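(The JSON printed above and the fio job description are handed to the fio spdk_bdev plugin over /dev/fd/62 and /dev/fd/61; the job-file side is never echoed into the log. As a rough, hand-written equivalent of that invocation — file paths, exact job options, and the bdev names Nvme0n1/Nvme1n1 are assumptions inferred from the attach_controller calls above, not copied from the script:

  cat > /tmp/dif.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  spdk_json_conf=/tmp/bdev_nvme.json   ; JSON describing the two bdev_nvme_attach_controller calls shown above
  thread=1
  [filename0]
  filename=Nvme0n1
  [filename1]
  filename=Nvme1n1
  EOF
  LD_PRELOAD=/usr/lib64/libasan.so.8 /usr/src/fio/fio /tmp/dif.fio

The ASAN preload mirrors the LD_PRELOAD line the harness computed just above.)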
00:30:03.645 fio-3.35 00:30:03.645 Starting 4 threads 00:30:10.205 00:30:10.205 filename0: (groupid=0, jobs=1): err= 0: pid=90828: Tue Dec 10 11:33:16 2024 00:30:10.205 read: IOPS=1446, BW=11.3MiB/s (11.9MB/s)(56.5MiB/5001msec) 00:30:10.205 slat (nsec): min=5432, max=59375, avg=17521.70, stdev=4106.83 00:30:10.205 clat (usec): min=1602, max=8979, avg=5460.31, stdev=570.41 00:30:10.205 lat (usec): min=1618, max=9020, avg=5477.83, stdev=570.40 00:30:10.205 clat percentiles (usec): 00:30:10.205 | 1.00th=[ 3261], 5.00th=[ 4817], 10.00th=[ 4883], 20.00th=[ 4948], 00:30:10.205 | 30.00th=[ 5211], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5669], 00:30:10.205 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5800], 95.00th=[ 6128], 00:30:10.205 | 99.00th=[ 7046], 99.50th=[ 7242], 99.90th=[ 7635], 99.95th=[ 7701], 00:30:10.205 | 99.99th=[ 8979] 00:30:10.205 bw ( KiB/s): min=11136, max=12800, per=22.02%, avg=11450.67, stdev=531.86, samples=9 00:30:10.205 iops : min= 1392, max= 1600, avg=1431.33, stdev=66.48, samples=9 00:30:10.205 lat (msec) : 2=0.08%, 4=2.31%, 10=97.61% 00:30:10.205 cpu : usr=92.80%, sys=6.36%, ctx=55, majf=0, minf=1072 00:30:10.205 IO depths : 1=0.1%, 2=23.6%, 4=50.9%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.205 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.205 issued rwts: total=7236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.205 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:10.205 filename0: (groupid=0, jobs=1): err= 0: pid=90829: Tue Dec 10 11:33:16 2024 00:30:10.205 read: IOPS=1804, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5005msec) 00:30:10.205 slat (usec): min=5, max=150, avg=17.45, stdev= 4.92 00:30:10.205 clat (usec): min=1216, max=13698, avg=4383.23, stdev=1177.63 00:30:10.205 lat (usec): min=1227, max=13739, avg=4400.68, stdev=1177.67 00:30:10.205 clat percentiles (usec): 00:30:10.205 | 1.00th=[ 2573], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 3064], 00:30:10.205 | 30.00th=[ 3163], 40.00th=[ 4178], 50.00th=[ 4883], 60.00th=[ 5014], 00:30:10.205 | 70.00th=[ 5276], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5669], 00:30:10.205 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 8455], 99.95th=[12649], 00:30:10.205 | 99.99th=[13698] 00:30:10.205 bw ( KiB/s): min=12800, max=15376, per=28.00%, avg=14558.56, stdev=877.66, samples=9 00:30:10.205 iops : min= 1600, max= 1922, avg=1819.78, stdev=109.72, samples=9 00:30:10.205 lat (msec) : 2=0.52%, 4=39.16%, 10=60.23%, 20=0.09% 00:30:10.205 cpu : usr=92.15%, sys=6.79%, ctx=6, majf=0, minf=1074 00:30:10.205 IO depths : 1=0.1%, 2=5.9%, 4=60.7%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.205 complete : 0=0.0%, 4=97.8%, 8=2.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.205 issued rwts: total=9034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.205 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:10.205 filename1: (groupid=0, jobs=1): err= 0: pid=90830: Tue Dec 10 11:33:16 2024 00:30:10.205 read: IOPS=1433, BW=11.2MiB/s (11.7MB/s)(56.0MiB/5001msec) 00:30:10.205 slat (nsec): min=4370, max=49467, avg=17830.60, stdev=4327.41 00:30:10.205 clat (usec): min=1144, max=8285, avg=5509.79, stdev=566.79 00:30:10.205 lat (usec): min=1155, max=8304, avg=5527.62, stdev=566.66 00:30:10.205 clat percentiles (usec): 00:30:10.205 | 1.00th=[ 3720], 5.00th=[ 4817], 10.00th=[ 4883], 20.00th=[ 4948], 00:30:10.205 | 30.00th=[ 
5342], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5669], 00:30:10.205 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5800], 95.00th=[ 6325], 00:30:10.205 | 99.00th=[ 7439], 99.50th=[ 7635], 99.90th=[ 8094], 99.95th=[ 8160], 00:30:10.205 | 99.99th=[ 8291] 00:30:10.205 bw ( KiB/s): min=10880, max=12800, per=21.91%, avg=11392.00, stdev=565.23, samples=9 00:30:10.205 iops : min= 1360, max= 1600, avg=1424.00, stdev=70.65, samples=9 00:30:10.205 lat (msec) : 2=0.17%, 4=1.09%, 10=98.74% 00:30:10.205 cpu : usr=92.22%, sys=6.84%, ctx=25, majf=0, minf=1075 00:30:10.205 IO depths : 1=0.1%, 2=24.4%, 4=50.4%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.205 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.205 issued rwts: total=7167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.205 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:10.205 filename1: (groupid=0, jobs=1): err= 0: pid=90831: Tue Dec 10 11:33:16 2024 00:30:10.205 read: IOPS=1818, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5001msec) 00:30:10.205 slat (nsec): min=4467, max=76559, avg=17712.02, stdev=4945.94 00:30:10.205 clat (usec): min=1206, max=8616, avg=4349.30, stdev=1162.08 00:30:10.205 lat (usec): min=1219, max=8634, avg=4367.02, stdev=1161.09 00:30:10.205 clat percentiles (usec): 00:30:10.205 | 1.00th=[ 1778], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2933], 00:30:10.205 | 30.00th=[ 3163], 40.00th=[ 3916], 50.00th=[ 4883], 60.00th=[ 5014], 00:30:10.205 | 70.00th=[ 5211], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5669], 00:30:10.205 | 99.00th=[ 6128], 99.50th=[ 6259], 99.90th=[ 6521], 99.95th=[ 7111], 00:30:10.205 | 99.99th=[ 8586] 00:30:10.205 bw ( KiB/s): min=12800, max=15456, per=28.22%, avg=14675.56, stdev=941.21, samples=9 00:30:10.205 iops : min= 1600, max= 1932, avg=1834.44, stdev=117.65, samples=9 00:30:10.205 lat (msec) : 2=1.04%, 4=39.27%, 10=59.68% 00:30:10.205 cpu : usr=92.54%, sys=6.42%, ctx=14, majf=0, minf=1074 00:30:10.205 IO depths : 1=0.1%, 2=5.5%, 4=60.9%, 8=33.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.205 complete : 0=0.0%, 4=97.9%, 8=2.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.205 issued rwts: total=9095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.205 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:10.205 00:30:10.205 Run status group 0 (all jobs): 00:30:10.205 READ: bw=50.8MiB/s (53.2MB/s), 11.2MiB/s-14.2MiB/s (11.7MB/s-14.9MB/s), io=254MiB (267MB), run=5001-5005msec 00:30:10.772 ----------------------------------------------------- 00:30:10.772 Suppressions used: 00:30:10.772 count bytes template 00:30:10.772 6 52 /usr/src/fio/parse.c 00:30:10.772 1 8 libtcmalloc_minimal.so 00:30:10.772 1 904 libcrypto.so 00:30:10.772 ----------------------------------------------------- 00:30:10.772 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:10.772 
11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.772 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:11.031 ************************************ 00:30:11.031 END TEST fio_dif_rand_params 00:30:11.031 ************************************ 00:30:11.031 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.031 00:30:11.031 real 0m28.266s 00:30:11.031 user 2m8.008s 00:30:11.031 sys 0m8.961s 00:30:11.031 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.031 11:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:11.031 11:33:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:11.031 11:33:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:11.031 11:33:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.031 11:33:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:11.031 ************************************ 00:30:11.031 START TEST fio_dif_digest 00:30:11.031 ************************************ 00:30:11.031 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:30:11.031 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:11.031 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:11.031 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:11.031 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:11.031 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:11.031 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:11.031 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:11.031 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 
00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.032 bdev_null0 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.032 [2024-12-10 11:33:17.690602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.032 { 00:30:11.032 "params": { 00:30:11.032 "name": "Nvme$subsystem", 00:30:11.032 "trtype": "$TEST_TRANSPORT", 00:30:11.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.032 "adrfam": "ipv4", 00:30:11.032 "trsvcid": "$NVMF_PORT", 
00:30:11.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.032 "hdgst": ${hdgst:-false}, 00:30:11.032 "ddgst": ${ddgst:-false} 00:30:11.032 }, 00:30:11.032 "method": "bdev_nvme_attach_controller" 00:30:11.032 } 00:30:11.032 EOF 00:30:11.032 )") 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:11.032 "params": { 00:30:11.032 "name": "Nvme0", 00:30:11.032 "trtype": "tcp", 00:30:11.032 "traddr": "10.0.0.3", 00:30:11.032 "adrfam": "ipv4", 00:30:11.032 "trsvcid": "4420", 00:30:11.032 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.032 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:11.032 "hdgst": true, 00:30:11.032 "ddgst": true 00:30:11.032 }, 00:30:11.032 "method": "bdev_nvme_attach_controller" 00:30:11.032 }' 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:11.032 11:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.291 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:11.291 ... 00:30:11.291 fio-3.35 00:30:11.291 Starting 3 threads 00:30:23.520 00:30:23.520 filename0: (groupid=0, jobs=1): err= 0: pid=90941: Tue Dec 10 11:33:28 2024 00:30:23.520 read: IOPS=177, BW=22.2MiB/s (23.2MB/s)(222MiB/10013msec) 00:30:23.520 slat (nsec): min=8433, max=90755, avg=21184.16, stdev=6772.75 00:30:23.520 clat (usec): min=16569, max=22819, avg=16863.80, stdev=650.09 00:30:23.520 lat (usec): min=16589, max=22849, avg=16884.99, stdev=650.16 00:30:23.520 clat percentiles (usec): 00:30:23.520 | 1.00th=[16581], 5.00th=[16712], 10.00th=[16712], 20.00th=[16712], 00:30:23.520 | 30.00th=[16712], 40.00th=[16712], 50.00th=[16712], 60.00th=[16712], 00:30:23.520 | 70.00th=[16909], 80.00th=[16909], 90.00th=[16909], 95.00th=[17171], 00:30:23.520 | 99.00th=[21890], 99.50th=[22676], 99.90th=[22676], 99.95th=[22938], 00:30:23.520 | 99.99th=[22938] 00:30:23.520 bw ( KiB/s): min=20736, max=23040, per=33.33%, avg=22692.00, stdev=578.74, samples=20 00:30:23.520 iops : min= 162, max= 180, avg=177.20, stdev= 4.50, samples=20 00:30:23.520 lat (msec) : 20=98.65%, 50=1.35% 00:30:23.520 cpu : usr=91.95%, sys=7.41%, ctx=22, majf=0, minf=1072 00:30:23.520 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.520 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.520 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:23.520 filename0: (groupid=0, jobs=1): err= 0: pid=90942: Tue Dec 10 11:33:28 2024 00:30:23.521 read: IOPS=177, BW=22.2MiB/s (23.2MB/s)(222MiB/10017msec) 00:30:23.521 slat (nsec): min=5497, max=89425, avg=20530.79, stdev=6321.15 00:30:23.521 clat (usec): min=16549, max=23525, avg=16872.50, stdev=696.63 00:30:23.521 lat (usec): min=16562, max=23563, avg=16893.03, stdev=696.97 00:30:23.521 clat percentiles (usec): 00:30:23.521 | 1.00th=[16581], 5.00th=[16581], 10.00th=[16712], 20.00th=[16712], 00:30:23.521 | 30.00th=[16712], 40.00th=[16712], 50.00th=[16712], 60.00th=[16712], 00:30:23.521 | 
70.00th=[16909], 80.00th=[16909], 90.00th=[16909], 95.00th=[17171], 00:30:23.521 | 99.00th=[22676], 99.50th=[22676], 99.90th=[23462], 99.95th=[23462], 00:30:23.521 | 99.99th=[23462] 00:30:23.521 bw ( KiB/s): min=21504, max=23040, per=33.33%, avg=22689.85, stdev=465.05, samples=20 00:30:23.521 iops : min= 168, max= 180, avg=177.20, stdev= 3.65, samples=20 00:30:23.521 lat (msec) : 20=98.48%, 50=1.52% 00:30:23.521 cpu : usr=92.12%, sys=7.28%, ctx=8, majf=0, minf=1075 00:30:23.521 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.521 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.521 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:23.521 filename0: (groupid=0, jobs=1): err= 0: pid=90943: Tue Dec 10 11:33:28 2024 00:30:23.521 read: IOPS=177, BW=22.2MiB/s (23.2MB/s)(222MiB/10015msec) 00:30:23.521 slat (nsec): min=7852, max=90578, avg=21537.05, stdev=7048.10 00:30:23.521 clat (usec): min=16541, max=22837, avg=16865.24, stdev=667.47 00:30:23.521 lat (usec): min=16561, max=22862, avg=16886.78, stdev=667.66 00:30:23.521 clat percentiles (usec): 00:30:23.521 | 1.00th=[16581], 5.00th=[16581], 10.00th=[16712], 20.00th=[16712], 00:30:23.521 | 30.00th=[16712], 40.00th=[16712], 50.00th=[16712], 60.00th=[16712], 00:30:23.521 | 70.00th=[16909], 80.00th=[16909], 90.00th=[16909], 95.00th=[17171], 00:30:23.521 | 99.00th=[21890], 99.50th=[22676], 99.90th=[22938], 99.95th=[22938], 00:30:23.521 | 99.99th=[22938] 00:30:23.521 bw ( KiB/s): min=20736, max=23040, per=33.33%, avg=22689.80, stdev=580.33, samples=20 00:30:23.521 iops : min= 162, max= 180, avg=177.20, stdev= 4.50, samples=20 00:30:23.521 lat (msec) : 20=98.48%, 50=1.52% 00:30:23.521 cpu : usr=91.64%, sys=7.53%, ctx=78, majf=0, minf=1074 00:30:23.521 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.521 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.521 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:23.521 00:30:23.521 Run status group 0 (all jobs): 00:30:23.521 READ: bw=66.5MiB/s (69.7MB/s), 22.2MiB/s-22.2MiB/s (23.2MB/s-23.2MB/s), io=666MiB (698MB), run=10013-10017msec 00:30:23.521 ----------------------------------------------------- 00:30:23.521 Suppressions used: 00:30:23.521 count bytes template 00:30:23.521 5 44 /usr/src/fio/parse.c 00:30:23.521 1 8 libtcmalloc_minimal.so 00:30:23.521 1 904 libcrypto.so 00:30:23.521 ----------------------------------------------------- 00:30:23.521 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.521 11:33:30 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.521 ************************************ 00:30:23.521 END TEST fio_dif_digest 00:30:23.521 ************************************ 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.521 00:30:23.521 real 0m12.542s 00:30:23.521 user 0m29.688s 00:30:23.521 sys 0m2.601s 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.521 11:33:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.521 11:33:30 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:23.521 11:33:30 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.521 rmmod nvme_tcp 00:30:23.521 rmmod nvme_fabrics 00:30:23.521 rmmod nvme_keyring 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 90178 ']' 00:30:23.521 11:33:30 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 90178 00:30:23.521 11:33:30 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 90178 ']' 00:30:23.521 11:33:30 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 90178 00:30:23.521 11:33:30 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:30:23.521 11:33:30 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.521 11:33:30 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90178 00:30:23.780 11:33:30 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:23.780 11:33:30 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:23.780 killing process with pid 90178 00:30:23.780 11:33:30 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90178' 00:30:23.780 11:33:30 nvmf_dif -- common/autotest_common.sh@973 -- # kill 90178 00:30:23.780 11:33:30 nvmf_dif -- common/autotest_common.sh@978 -- # wait 90178 00:30:24.716 11:33:31 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:30:24.716 11:33:31 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:24.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:25.232 Waiting for block devices as requested 00:30:25.232 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:25.232 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:25.232 11:33:32 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.232 11:33:32 
nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.232 11:33:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:30:25.232 11:33:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:30:25.232 11:33:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.232 11:33:32 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.232 11:33:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.232 11:33:32 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:25.232 11:33:32 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:25.232 11:33:32 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.497 11:33:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:25.497 11:33:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.497 11:33:32 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:30:25.497 00:30:25.497 real 1m10.236s 00:30:25.497 user 4m7.442s 00:30:25.497 sys 0m20.010s 00:30:25.497 11:33:32 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.497 11:33:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:25.497 ************************************ 00:30:25.497 END TEST nvmf_dif 00:30:25.497 ************************************ 00:30:25.790 11:33:32 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:25.790 11:33:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:25.790 11:33:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.790 11:33:32 -- common/autotest_common.sh@10 -- # set +x 00:30:25.790 ************************************ 00:30:25.790 START TEST nvmf_abort_qd_sizes 00:30:25.790 ************************************ 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:25.790 * Looking for test storage... 
00:30:25.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.790 --rc genhtml_branch_coverage=1 00:30:25.790 --rc genhtml_function_coverage=1 00:30:25.790 --rc genhtml_legend=1 00:30:25.790 --rc geninfo_all_blocks=1 00:30:25.790 --rc geninfo_unexecuted_blocks=1 00:30:25.790 00:30:25.790 ' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:25.790 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.790 --rc genhtml_branch_coverage=1 00:30:25.790 --rc genhtml_function_coverage=1 00:30:25.790 --rc genhtml_legend=1 00:30:25.790 --rc geninfo_all_blocks=1 00:30:25.790 --rc geninfo_unexecuted_blocks=1 00:30:25.790 00:30:25.790 ' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.790 --rc genhtml_branch_coverage=1 00:30:25.790 --rc genhtml_function_coverage=1 00:30:25.790 --rc genhtml_legend=1 00:30:25.790 --rc geninfo_all_blocks=1 00:30:25.790 --rc geninfo_unexecuted_blocks=1 00:30:25.790 00:30:25.790 ' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.790 --rc genhtml_branch_coverage=1 00:30:25.790 --rc genhtml_function_coverage=1 00:30:25.790 --rc genhtml_legend=1 00:30:25.790 --rc geninfo_all_blocks=1 00:30:25.790 --rc geninfo_unexecuted_blocks=1 00:30:25.790 00:30:25.790 ' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:25.790 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:25.790 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:25.791 Cannot find device "nvmf_init_br" 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:25.791 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:26.049 Cannot find device "nvmf_init_br2" 00:30:26.049 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:26.049 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:26.049 Cannot find device "nvmf_tgt_br" 00:30:26.049 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:30:26.049 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:26.049 Cannot find device "nvmf_tgt_br2" 00:30:26.049 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:30:26.049 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:26.049 Cannot find device "nvmf_init_br" 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:30:26.050 Cannot find device "nvmf_init_br2" 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:26.050 Cannot find device "nvmf_tgt_br" 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:26.050 Cannot find device "nvmf_tgt_br2" 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:26.050 Cannot find device "nvmf_br" 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:26.050 Cannot find device "nvmf_init_if" 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:26.050 Cannot find device "nvmf_init_if2" 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:26.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:26.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
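(The nvmf_veth_init trace here, together with the bridge and iptables lines that follow, builds the virtual network the TCP target will listen on. Condensed by hand to the first initiator/target pair — the script also creates the *_if2/*_br2 pair and the ACCEPT rules shown below — the topology is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is moved into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                              # bridge ties the two host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

so 10.0.0.1 on the host can reach 10.0.0.3 inside nvmf_tgt_ns_spdk, which is exactly what the ping checks below verify before the target application is started.)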
00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:26.050 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:26.308 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:26.309 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:26.309 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:30:26.309 00:30:26.309 --- 10.0.0.3 ping statistics --- 00:30:26.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.309 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:26.309 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:26.309 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:30:26.309 00:30:26.309 --- 10.0.0.4 ping statistics --- 00:30:26.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.309 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:26.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:30:26.309 00:30:26.309 --- 10.0.0.1 ping statistics --- 00:30:26.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.309 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:26.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:30:26.309 00:30:26.309 --- 10.0.0.2 ping statistics --- 00:30:26.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.309 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:26.309 11:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:26.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:27.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:27.135 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=91610 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 91610 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 91610 ']' 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.135 11:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.136 11:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.136 11:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.136 11:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:27.394 [2024-12-10 11:33:33.988200] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:30:27.394 [2024-12-10 11:33:33.988410] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.394 [2024-12-10 11:33:34.175423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.653 [2024-12-10 11:33:34.285198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.653 [2024-12-10 11:33:34.285262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.653 [2024-12-10 11:33:34.285283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.653 [2024-12-10 11:33:34.285296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.653 [2024-12-10 11:33:34.285310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.653 [2024-12-10 11:33:34.287121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.653 [2024-12-10 11:33:34.287262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.653 [2024-12-10 11:33:34.287441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.653 [2024-12-10 11:33:34.287893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.653 [2024-12-10 11:33:34.473218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:30:28.218 11:33:34 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:30:28.218 11:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:28.219 11:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
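The block above is nvme_in_userspace from scripts/common.sh locating NVMe controllers by PCI class code rather than by driver name: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVMe I/O controller). Stripped of the per-device bookkeeping, the discovery pipeline it runs is:

    # Match the 0108 class code in `lspci -mm -n -D` output (prog-if 02) and print the PCI addresses.
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
    printf '%s\n' "${nvmes[@]}"    # this run: 0000:00:10.0 and 0000:00:11.0

The real helper then walks the list with pci_can_use to honor PCI allow/block lists and checks /sys/bus/pci/drivers/nvme/<bdf> on Linux before handing the two addresses back to the test.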
00:30:28.219 11:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:28.219 11:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:28.219 11:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:28.219 11:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.219 11:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:28.219 ************************************ 00:30:28.219 START TEST spdk_target_abort 00:30:28.219 ************************************ 00:30:28.219 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:30:28.219 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:28.219 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:28.219 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.219 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:28.476 spdk_targetn1 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:28.476 [2024-12-10 11:33:35.075784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:28.476 [2024-12-10 11:33:35.125966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:28.476 11:33:35 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:28.476 11:33:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:31.760 Initializing NVMe Controllers 00:30:31.760 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:31.760 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:31.760 Initialization complete. Launching workers. 
00:30:31.760 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9433, failed: 0 00:30:31.760 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1021, failed to submit 8412 00:30:31.760 success 834, unsuccessful 187, failed 0 00:30:31.760 11:33:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:31.760 11:33:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:35.945 Initializing NVMe Controllers 00:30:35.945 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:35.945 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:35.945 Initialization complete. Launching workers. 00:30:35.945 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8952, failed: 0 00:30:35.945 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1131, failed to submit 7821 00:30:35.945 success 401, unsuccessful 730, failed 0 00:30:35.945 11:33:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:35.945 11:33:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:39.227 Initializing NVMe Controllers 00:30:39.227 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:39.227 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:39.227 Initialization complete. Launching workers. 
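Each pass of this test differs only in queue depth: the abort example drives a mixed read/write workload of 4 KiB I/Os (-w rw -M 50) at the given depth against the 10.0.0.3:4420 listener while submitting aborts for the outstanding commands, and reports per-pass success/unsuccessful/failed counts for those aborts. Condensed, with the paths and target string exactly as traced:

    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done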
00:30:39.227 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27846, failed: 0 00:30:39.227 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2191, failed to submit 25655 00:30:39.228 success 371, unsuccessful 1820, failed 0 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 91610 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 91610 ']' 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 91610 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91610 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:39.228 killing process with pid 91610 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91610' 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 91610 00:30:39.228 11:33:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 91610 00:30:39.794 00:30:39.794 real 0m11.618s 00:30:39.794 user 0m44.748s 00:30:39.794 sys 0m2.667s 00:30:39.794 11:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.795 11:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.053 ************************************ 00:30:40.053 END TEST spdk_target_abort 00:30:40.053 ************************************ 00:30:40.053 11:33:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:40.053 11:33:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:40.053 11:33:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.053 11:33:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:40.053 ************************************ 00:30:40.053 START TEST kernel_target_abort 00:30:40.053 
************************************ 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:40.053 11:33:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:40.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:40.312 Waiting for block devices as requested 00:30:40.312 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:40.570 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:40.828 No valid GPT data, bailing 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:30:40.828 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:40.829 No valid GPT data, bailing 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:30:40.829 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:41.087 No valid GPT data, bailing 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:41.087 No valid GPT data, bailing 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:41.087 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 --hostid=20cf3ff5-7c8b-4175-aa20-a641780c6f81 -a 10.0.0.1 -t tcp -s 4420 00:30:41.087 00:30:41.087 Discovery Log Number of Records 2, Generation counter 2 00:30:41.087 =====Discovery Log Entry 0====== 00:30:41.087 trtype: tcp 00:30:41.087 adrfam: ipv4 00:30:41.087 subtype: current discovery subsystem 00:30:41.087 treq: not specified, sq flow control disable supported 00:30:41.087 portid: 1 00:30:41.087 trsvcid: 4420 00:30:41.087 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:41.087 traddr: 10.0.0.1 00:30:41.087 eflags: none 00:30:41.087 sectype: none 00:30:41.087 =====Discovery Log Entry 1====== 00:30:41.087 trtype: tcp 00:30:41.087 adrfam: ipv4 00:30:41.087 subtype: nvme subsystem 00:30:41.087 treq: not specified, sq flow control disable supported 00:30:41.087 portid: 1 00:30:41.087 trsvcid: 4420 00:30:41.087 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:41.087 traddr: 10.0.0.1 00:30:41.087 eflags: none 00:30:41.088 sectype: none 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:41.088 11:33:47 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:41.088 11:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:44.373 Initializing NVMe Controllers 00:30:44.374 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:44.374 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:44.374 Initialization complete. Launching workers. 00:30:44.374 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27440, failed: 0 00:30:44.374 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27440, failed to submit 0 00:30:44.374 success 0, unsuccessful 27440, failed 0 00:30:44.374 11:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:44.374 11:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:47.661 Initializing NVMe Controllers 00:30:47.661 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:47.661 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:47.661 Initialization complete. Launching workers. 
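Unlike the spdk_target_abort half, this test exposes the /dev/nvme1n1 device selected above through the Linux in-kernel nvmet target, configured entirely through configfs as traced earlier. The trace shows the mkdir/echo/ln -s sequence but not the attribute files being written to; assuming the standard nvmet configfs layout, the equivalent setup is roughly:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo 1            > "$subsys/attr_allow_any_host"        # attribute names assumed, not shown in trace
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"

    # The test then confirms the export is visible before running the abort passes against it.
    nvme discover -t tcp -a 10.0.0.1 -s 4420

The clean_kernel_target teardown later in the log mirrors this: remove the port's subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.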
00:30:47.661 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57920, failed: 0 00:30:47.661 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25207, failed to submit 32713 00:30:47.661 success 0, unsuccessful 25207, failed 0 00:30:47.661 11:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:47.661 11:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:50.947 Initializing NVMe Controllers 00:30:50.947 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:50.947 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:50.947 Initialization complete. Launching workers. 00:30:50.947 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64133, failed: 0 00:30:50.947 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16036, failed to submit 48097 00:30:50.947 success 0, unsuccessful 16036, failed 0 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:50.947 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:51.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:52.474 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:52.474 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:52.474 00:30:52.474 real 0m12.503s 00:30:52.474 user 0m7.069s 00:30:52.474 sys 0m3.239s 00:30:52.474 11:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.474 11:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.474 ************************************ 00:30:52.474 END TEST kernel_target_abort 00:30:52.474 ************************************ 00:30:52.474 11:33:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:52.474 11:33:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:52.474 
11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:52.474 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:52.732 rmmod nvme_tcp 00:30:52.732 rmmod nvme_fabrics 00:30:52.732 rmmod nvme_keyring 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 91610 ']' 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 91610 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 91610 ']' 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 91610 00:30:52.732 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91610) - No such process 00:30:52.732 Process with pid 91610 is not found 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 91610 is not found' 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:30:52.732 11:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:52.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:52.990 Waiting for block devices as requested 00:30:53.250 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:53.250 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:53.250 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:53.509 11:34:00 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:30:53.509 00:30:53.509 real 0m27.944s 00:30:53.509 user 0m53.166s 00:30:53.509 sys 0m7.397s 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:53.509 11:34:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:53.509 ************************************ 00:30:53.509 END TEST nvmf_abort_qd_sizes 00:30:53.509 ************************************ 00:30:53.509 11:34:00 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:53.509 11:34:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:53.509 11:34:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:53.509 11:34:00 -- common/autotest_common.sh@10 -- # set +x 00:30:53.509 ************************************ 00:30:53.509 START TEST keyring_file 00:30:53.509 ************************************ 00:30:53.509 11:34:00 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:53.768 * Looking for test storage... 
00:30:53.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:53.768 11:34:00 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:53.768 11:34:00 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:30:53.768 11:34:00 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:53.768 11:34:00 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:30:53.768 11:34:00 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@345 -- # : 1 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@353 -- # local d=1 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@355 -- # echo 1 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@353 -- # local d=2 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@355 -- # echo 2 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@368 -- # return 0 00:30:53.769 11:34:00 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:53.769 11:34:00 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:53.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.769 --rc genhtml_branch_coverage=1 00:30:53.769 --rc genhtml_function_coverage=1 00:30:53.769 --rc genhtml_legend=1 00:30:53.769 --rc geninfo_all_blocks=1 00:30:53.769 --rc geninfo_unexecuted_blocks=1 00:30:53.769 00:30:53.769 ' 00:30:53.769 11:34:00 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:53.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.769 --rc genhtml_branch_coverage=1 00:30:53.769 --rc genhtml_function_coverage=1 00:30:53.769 --rc genhtml_legend=1 00:30:53.769 --rc geninfo_all_blocks=1 00:30:53.769 --rc 
geninfo_unexecuted_blocks=1 00:30:53.769 00:30:53.769 ' 00:30:53.769 11:34:00 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:53.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.769 --rc genhtml_branch_coverage=1 00:30:53.769 --rc genhtml_function_coverage=1 00:30:53.769 --rc genhtml_legend=1 00:30:53.769 --rc geninfo_all_blocks=1 00:30:53.769 --rc geninfo_unexecuted_blocks=1 00:30:53.769 00:30:53.769 ' 00:30:53.769 11:34:00 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:53.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.769 --rc genhtml_branch_coverage=1 00:30:53.769 --rc genhtml_function_coverage=1 00:30:53.769 --rc genhtml_legend=1 00:30:53.769 --rc geninfo_all_blocks=1 00:30:53.769 --rc geninfo_unexecuted_blocks=1 00:30:53.769 00:30:53.769 ' 00:30:53.769 11:34:00 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:53.769 11:34:00 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.769 11:34:00 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.769 11:34:00 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.769 11:34:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.769 11:34:00 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.769 11:34:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:53.769 11:34:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@51 -- # : 0 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:53.769 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:53.769 11:34:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:53.769 11:34:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:53.769 11:34:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:53.769 11:34:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:53.769 11:34:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:53.769 11:34:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:53.769 11:34:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:53.769 11:34:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:53.769 11:34:00 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:53.769 11:34:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:53.769 11:34:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:53.769 11:34:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:53.769 11:34:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Dhr286mCu5 00:30:53.769 11:34:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:30:53.769 11:34:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Dhr286mCu5 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Dhr286mCu5 00:30:54.029 11:34:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Dhr286mCu5 00:30:54.029 11:34:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iqgzTqEXF4 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:54.029 11:34:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:54.029 11:34:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:30:54.029 11:34:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:30:54.029 11:34:00 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:30:54.029 11:34:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:30:54.029 11:34:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iqgzTqEXF4 00:30:54.029 11:34:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iqgzTqEXF4 00:30:54.029 11:34:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.iqgzTqEXF4 00:30:54.029 11:34:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=92637 00:30:54.029 11:34:00 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:54.029 11:34:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 92637 00:30:54.029 11:34:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 92637 ']' 00:30:54.029 11:34:00 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.029 11:34:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
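The two /tmp/tmp.* files created above are NVMe TLS pre-shared keys in interchange format, produced by prep_key from the test's keyring/common.sh: take a raw hex secret, run it through the format_interchange_psk helper (the python one-liner from nvmf/common.sh seen in the trace), write the result to a mktemp path, and lock the permissions down so it can serve as a key file. Condensed, with the encoding left to that helper exactly as the test does:

    key_hex=00112233445566778899aabbccddeeff         # key0; key1 uses 112233445566778899aabbccddeeff00
    path=$(mktemp)                                    # e.g. /tmp/tmp.Dhr286mCu5 in this run
    format_interchange_psk "$key_hex" 0 > "$path"     # 0 is the digest argument this test passes
    chmod 0600 "$path"

The resulting key0path/key1path values are what the keyring_file test cases that follow register and hand to the target and bdevperf.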
00:30:54.029 11:34:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.029 11:34:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.029 11:34:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:54.029 [2024-12-10 11:34:00.791713] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:30:54.029 [2024-12-10 11:34:00.791923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92637 ] 00:30:54.288 [2024-12-10 11:34:00.983583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.546 [2024-12-10 11:34:01.123961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.546 [2024-12-10 11:34:01.367966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:55.113 11:34:01 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.113 11:34:01 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:30:55.113 11:34:01 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:55.113 11:34:01 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.113 11:34:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:55.113 [2024-12-10 11:34:01.913791] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.113 null0 00:30:55.372 [2024-12-10 11:34:01.945779] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:55.372 [2024-12-10 11:34:01.946125] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.372 11:34:01 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:55.372 [2024-12-10 11:34:01.973773] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:55.372 request: 00:30:55.372 { 00:30:55.372 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.372 "secure_channel": false, 00:30:55.372 "listen_address": { 00:30:55.372 "trtype": "tcp", 00:30:55.372 "traddr": "127.0.0.1", 00:30:55.372 "trsvcid": "4420" 00:30:55.372 }, 00:30:55.372 "method": "nvmf_subsystem_add_listener", 
00:30:55.372 "req_id": 1 00:30:55.372 } 00:30:55.372 Got JSON-RPC error response 00:30:55.372 response: 00:30:55.372 { 00:30:55.372 "code": -32602, 00:30:55.372 "message": "Invalid parameters" 00:30:55.372 } 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:55.372 11:34:01 keyring_file -- keyring/file.sh@47 -- # bperfpid=92659 00:30:55.372 11:34:01 keyring_file -- keyring/file.sh@49 -- # waitforlisten 92659 /var/tmp/bperf.sock 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 92659 ']' 00:30:55.372 11:34:01 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:55.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.372 11:34:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:55.372 [2024-12-10 11:34:02.092828] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:30:55.373 [2024-12-10 11:34:02.092994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92659 ] 00:30:55.632 [2024-12-10 11:34:02.277597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.632 [2024-12-10 11:34:02.403787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.891 [2024-12-10 11:34:02.605406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:56.458 11:34:03 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.458 11:34:03 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:30:56.458 11:34:03 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5 00:30:56.458 11:34:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5 00:30:56.717 11:34:03 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.iqgzTqEXF4 00:30:56.717 11:34:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.iqgzTqEXF4 00:30:56.976 11:34:03 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:56.976 11:34:03 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:30:56.976 11:34:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:56.976 11:34:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:56.976 11:34:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.235 11:34:04 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Dhr286mCu5 == \/\t\m\p\/\t\m\p\.\D\h\r\2\8\6\m\C\u\5 ]] 00:30:57.494 11:34:04 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:30:57.494 11:34:04 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:30:57.494 11:34:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.494 11:34:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.494 11:34:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:57.753 11:34:04 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.iqgzTqEXF4 == \/\t\m\p\/\t\m\p\.\i\q\g\z\T\q\E\X\F\4 ]] 00:30:57.753 11:34:04 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:30:57.753 11:34:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:57.753 11:34:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.753 11:34:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:57.753 11:34:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.753 11:34:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.012 11:34:04 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:58.012 11:34:04 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:30:58.012 11:34:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:58.012 11:34:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:58.012 11:34:04 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.012 11:34:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:58.012 11:34:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.271 11:34:04 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:30:58.271 11:34:04 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.271 11:34:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.530 [2024-12-10 11:34:05.186138] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:58.530 nvme0n1 00:30:58.530 11:34:05 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:30:58.530 11:34:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:58.530 11:34:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.530 11:34:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.530 11:34:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:58.530 11:34:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:59.097 11:34:05 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:30:59.097 11:34:05 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:30:59.097 11:34:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:59.097 11:34:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:59.097 11:34:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:59.097 11:34:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:59.097 11:34:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.097 11:34:05 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:30:59.097 11:34:05 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:59.356 Running I/O for 1 seconds... 
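[editor's note] The I/O phase that starts here is driven entirely over the bperf RPC socket. The two commands below are the ones visible in the trace (keyring/file.sh@58 and @63), shown without the xtrace prefixes for readability:

    # attach a controller using a named key from the in-process keyring,
    # then tell the already-running bdevperf instance to start its workload
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests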
00:31:00.293 8078.00 IOPS, 31.55 MiB/s 00:31:00.293 Latency(us) 00:31:00.293 [2024-12-10T11:34:07.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.293 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:00.293 nvme0n1 : 1.01 8130.90 31.76 0.00 0.00 15681.37 7268.54 28359.21 00:31:00.293 [2024-12-10T11:34:07.119Z] =================================================================================================================== 00:31:00.293 [2024-12-10T11:34:07.119Z] Total : 8130.90 31.76 0.00 0.00 15681.37 7268.54 28359.21 00:31:00.293 { 00:31:00.293 "results": [ 00:31:00.293 { 00:31:00.293 "job": "nvme0n1", 00:31:00.293 "core_mask": "0x2", 00:31:00.293 "workload": "randrw", 00:31:00.293 "percentage": 50, 00:31:00.293 "status": "finished", 00:31:00.293 "queue_depth": 128, 00:31:00.293 "io_size": 4096, 00:31:00.293 "runtime": 1.009482, 00:31:00.293 "iops": 8130.902779841543, 00:31:00.293 "mibps": 31.761338983756026, 00:31:00.293 "io_failed": 0, 00:31:00.293 "io_timeout": 0, 00:31:00.293 "avg_latency_us": 15681.3686904129, 00:31:00.293 "min_latency_us": 7268.538181818182, 00:31:00.293 "max_latency_us": 28359.214545454546 00:31:00.293 } 00:31:00.293 ], 00:31:00.293 "core_count": 1 00:31:00.293 } 00:31:00.293 11:34:07 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:00.293 11:34:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:00.881 11:34:07 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:31:00.881 11:34:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:00.881 11:34:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.881 11:34:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:00.881 11:34:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.881 11:34:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.169 11:34:07 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:01.169 11:34:07 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:31:01.169 11:34:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:01.169 11:34:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:01.169 11:34:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:01.169 11:34:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.169 11:34:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.427 11:34:08 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:31:01.427 11:34:08 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:01.427 11:34:08 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:31:01.427 11:34:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:01.427 11:34:08 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:31:01.427 11:34:08 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:01.427 11:34:08 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:31:01.427 11:34:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:01.427 11:34:08 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:01.427 11:34:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:01.685 [2024-12-10 11:34:08.314495] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:01.685 [2024-12-10 11:34:08.315048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:31:01.685 [2024-12-10 11:34:08.316010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:31:01.685 [2024-12-10 11:34:08.317001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:31:01.685 [2024-12-10 11:34:08.317046] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:01.685 [2024-12-10 11:34:08.317065] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:31:01.685 [2024-12-10 11:34:08.317081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:31:01.685 request: 00:31:01.685 { 00:31:01.685 "name": "nvme0", 00:31:01.685 "trtype": "tcp", 00:31:01.685 "traddr": "127.0.0.1", 00:31:01.685 "adrfam": "ipv4", 00:31:01.685 "trsvcid": "4420", 00:31:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.685 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.685 "prchk_reftag": false, 00:31:01.685 "prchk_guard": false, 00:31:01.685 "hdgst": false, 00:31:01.685 "ddgst": false, 00:31:01.685 "psk": "key1", 00:31:01.685 "allow_unrecognized_csi": false, 00:31:01.685 "method": "bdev_nvme_attach_controller", 00:31:01.685 "req_id": 1 00:31:01.685 } 00:31:01.685 Got JSON-RPC error response 00:31:01.685 response: 00:31:01.685 { 00:31:01.685 "code": -5, 00:31:01.685 "message": "Input/output error" 00:31:01.685 } 00:31:01.685 11:34:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:31:01.685 11:34:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:01.685 11:34:08 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:01.685 11:34:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:01.685 11:34:08 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:31:01.685 11:34:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:01.685 11:34:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:01.685 11:34:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.685 11:34:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:01.685 11:34:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.944 11:34:08 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:01.944 11:34:08 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:31:01.944 11:34:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:01.944 11:34:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:01.944 11:34:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:01.944 11:34:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.944 11:34:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:02.203 11:34:08 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:31:02.203 11:34:08 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:31:02.203 11:34:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:02.463 11:34:09 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:31:02.463 11:34:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:02.722 11:34:09 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:31:02.722 11:34:09 keyring_file -- keyring/file.sh@78 -- # jq length 00:31:02.722 11:34:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:02.982 11:34:09 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:31:02.982 11:34:09 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Dhr286mCu5 00:31:02.982 11:34:09 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5 00:31:02.982 11:34:09 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:31:02.982 11:34:09 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5 00:31:02.982 11:34:09 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:31:02.982 11:34:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:02.982 11:34:09 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:31:02.982 11:34:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:02.982 11:34:09 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5 00:31:02.982 11:34:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5 00:31:03.241 [2024-12-10 11:34:10.005134] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Dhr286mCu5': 0100660 00:31:03.241 [2024-12-10 11:34:10.005426] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:03.241 request: 00:31:03.241 { 00:31:03.241 "name": "key0", 00:31:03.241 "path": "/tmp/tmp.Dhr286mCu5", 00:31:03.241 "method": "keyring_file_add_key", 00:31:03.241 "req_id": 1 00:31:03.241 } 00:31:03.241 Got JSON-RPC error response 00:31:03.241 response: 00:31:03.241 { 00:31:03.241 "code": -1, 00:31:03.241 "message": "Operation not permitted" 00:31:03.241 } 00:31:03.241 11:34:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:31:03.241 11:34:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:03.241 11:34:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:03.241 11:34:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:03.241 11:34:10 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Dhr286mCu5 00:31:03.241 11:34:10 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5 00:31:03.241 11:34:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5 00:31:03.808 11:34:10 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Dhr286mCu5 00:31:03.808 11:34:10 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:31:03.808 11:34:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:03.808 11:34:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.808 11:34:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:03.808 11:34:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.808 11:34:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.067 11:34:10 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:31:04.067 11:34:10 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:04.067 11:34:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:31:04.067 11:34:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:04.067 11:34:10 
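[editor's note] The keyring.c error above ("Invalid permissions ... 0100660") illustrates the keyring_file check: a key file readable by group or others is refused at registration time. In shell terms, using the same RPCs and temp path as the trace:

    chmod 0660 /tmp/tmp.Dhr286mCu5   # too permissive -> keyring_file_add_key fails (-1, Operation not permitted)
    ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5
    chmod 0600 /tmp/tmp.Dhr286mCu5   # owner-only -> the add succeeds
    ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Dhr286mCu5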
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:31:04.067 11:34:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:04.067 11:34:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:31:04.067 11:34:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:04.067 11:34:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:04.067 11:34:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:04.326 [2024-12-10 11:34:10.968714] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Dhr286mCu5': No such file or directory 00:31:04.326 [2024-12-10 11:34:10.968777] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:04.326 [2024-12-10 11:34:10.968807] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:04.326 [2024-12-10 11:34:10.968822] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:31:04.326 [2024-12-10 11:34:10.968837] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:04.326 [2024-12-10 11:34:10.968856] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:04.326 request: 00:31:04.326 { 00:31:04.326 "name": "nvme0", 00:31:04.326 "trtype": "tcp", 00:31:04.326 "traddr": "127.0.0.1", 00:31:04.326 "adrfam": "ipv4", 00:31:04.326 "trsvcid": "4420", 00:31:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:04.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:04.326 "prchk_reftag": false, 00:31:04.326 "prchk_guard": false, 00:31:04.326 "hdgst": false, 00:31:04.326 "ddgst": false, 00:31:04.326 "psk": "key0", 00:31:04.326 "allow_unrecognized_csi": false, 00:31:04.326 "method": "bdev_nvme_attach_controller", 00:31:04.326 "req_id": 1 00:31:04.326 } 00:31:04.326 Got JSON-RPC error response 00:31:04.326 response: 00:31:04.326 { 00:31:04.326 "code": -19, 00:31:04.326 "message": "No such device" 00:31:04.326 } 00:31:04.326 11:34:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:31:04.326 11:34:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:04.327 11:34:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:04.327 11:34:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:04.327 11:34:10 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:31:04.327 11:34:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:04.586 11:34:11 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:04.586 11:34:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:04.586 11:34:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:04.586 11:34:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:04.586 
11:34:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:04.586 11:34:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:04.586 11:34:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1gzCm9aXGB 00:31:04.586 11:34:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:04.586 11:34:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:04.586 11:34:11 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:31:04.586 11:34:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:04.586 11:34:11 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:31:04.586 11:34:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:31:04.586 11:34:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:31:04.586 11:34:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1gzCm9aXGB 00:31:04.586 11:34:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1gzCm9aXGB 00:31:04.586 11:34:11 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.1gzCm9aXGB 00:31:04.586 11:34:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1gzCm9aXGB 00:31:04.586 11:34:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1gzCm9aXGB 00:31:04.845 11:34:11 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:04.845 11:34:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:05.411 nvme0n1 00:31:05.411 11:34:11 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:31:05.411 11:34:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:05.411 11:34:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:05.411 11:34:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:05.411 11:34:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:05.411 11:34:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:05.670 11:34:12 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:31:05.670 11:34:12 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:31:05.670 11:34:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:05.930 11:34:12 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:31:05.930 11:34:12 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:31:05.930 11:34:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:05.930 11:34:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:05.930 11:34:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.189 11:34:12 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:31:06.189 11:34:12 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:31:06.189 11:34:12 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:31:06.189 11:34:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:06.189 11:34:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:06.189 11:34:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:06.189 11:34:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.447 11:34:13 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:31:06.447 11:34:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:06.447 11:34:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:06.706 11:34:13 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:31:06.706 11:34:13 keyring_file -- keyring/file.sh@105 -- # jq length 00:31:06.706 11:34:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.965 11:34:13 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:31:06.965 11:34:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1gzCm9aXGB 00:31:06.965 11:34:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1gzCm9aXGB 00:31:07.533 11:34:14 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.iqgzTqEXF4 00:31:07.533 11:34:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.iqgzTqEXF4 00:31:07.533 11:34:14 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:07.533 11:34:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:07.844 nvme0n1 00:31:07.844 11:34:14 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:31:07.844 11:34:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:08.412 11:34:14 keyring_file -- keyring/file.sh@113 -- # config='{ 00:31:08.412 "subsystems": [ 00:31:08.412 { 00:31:08.412 "subsystem": "keyring", 00:31:08.412 "config": [ 00:31:08.412 { 00:31:08.412 "method": "keyring_file_add_key", 00:31:08.412 "params": { 00:31:08.412 "name": "key0", 00:31:08.412 "path": "/tmp/tmp.1gzCm9aXGB" 00:31:08.412 } 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "method": "keyring_file_add_key", 00:31:08.412 "params": { 00:31:08.412 "name": "key1", 00:31:08.412 "path": "/tmp/tmp.iqgzTqEXF4" 00:31:08.412 } 00:31:08.412 } 00:31:08.412 ] 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "subsystem": "iobuf", 00:31:08.412 "config": [ 00:31:08.412 { 00:31:08.412 "method": "iobuf_set_options", 00:31:08.412 "params": { 00:31:08.412 "small_pool_count": 8192, 00:31:08.412 "large_pool_count": 1024, 00:31:08.412 "small_bufsize": 8192, 00:31:08.412 "large_bufsize": 135168, 00:31:08.412 "enable_numa": false 00:31:08.412 } 00:31:08.412 } 00:31:08.412 ] 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "subsystem": 
"sock", 00:31:08.412 "config": [ 00:31:08.412 { 00:31:08.412 "method": "sock_set_default_impl", 00:31:08.412 "params": { 00:31:08.412 "impl_name": "uring" 00:31:08.412 } 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "method": "sock_impl_set_options", 00:31:08.412 "params": { 00:31:08.412 "impl_name": "ssl", 00:31:08.412 "recv_buf_size": 4096, 00:31:08.412 "send_buf_size": 4096, 00:31:08.412 "enable_recv_pipe": true, 00:31:08.412 "enable_quickack": false, 00:31:08.412 "enable_placement_id": 0, 00:31:08.412 "enable_zerocopy_send_server": true, 00:31:08.412 "enable_zerocopy_send_client": false, 00:31:08.412 "zerocopy_threshold": 0, 00:31:08.412 "tls_version": 0, 00:31:08.412 "enable_ktls": false 00:31:08.412 } 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "method": "sock_impl_set_options", 00:31:08.412 "params": { 00:31:08.412 "impl_name": "posix", 00:31:08.412 "recv_buf_size": 2097152, 00:31:08.412 "send_buf_size": 2097152, 00:31:08.412 "enable_recv_pipe": true, 00:31:08.412 "enable_quickack": false, 00:31:08.412 "enable_placement_id": 0, 00:31:08.412 "enable_zerocopy_send_server": true, 00:31:08.412 "enable_zerocopy_send_client": false, 00:31:08.412 "zerocopy_threshold": 0, 00:31:08.412 "tls_version": 0, 00:31:08.412 "enable_ktls": false 00:31:08.412 } 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "method": "sock_impl_set_options", 00:31:08.412 "params": { 00:31:08.412 "impl_name": "uring", 00:31:08.412 "recv_buf_size": 2097152, 00:31:08.412 "send_buf_size": 2097152, 00:31:08.412 "enable_recv_pipe": true, 00:31:08.412 "enable_quickack": false, 00:31:08.412 "enable_placement_id": 0, 00:31:08.412 "enable_zerocopy_send_server": false, 00:31:08.412 "enable_zerocopy_send_client": false, 00:31:08.412 "zerocopy_threshold": 0, 00:31:08.412 "tls_version": 0, 00:31:08.412 "enable_ktls": false 00:31:08.412 } 00:31:08.412 } 00:31:08.412 ] 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "subsystem": "vmd", 00:31:08.412 "config": [] 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "subsystem": "accel", 00:31:08.412 "config": [ 00:31:08.412 { 00:31:08.412 "method": "accel_set_options", 00:31:08.412 "params": { 00:31:08.412 "small_cache_size": 128, 00:31:08.412 "large_cache_size": 16, 00:31:08.412 "task_count": 2048, 00:31:08.412 "sequence_count": 2048, 00:31:08.412 "buf_count": 2048 00:31:08.412 } 00:31:08.412 } 00:31:08.412 ] 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "subsystem": "bdev", 00:31:08.412 "config": [ 00:31:08.412 { 00:31:08.412 "method": "bdev_set_options", 00:31:08.412 "params": { 00:31:08.412 "bdev_io_pool_size": 65535, 00:31:08.412 "bdev_io_cache_size": 256, 00:31:08.412 "bdev_auto_examine": true, 00:31:08.412 "iobuf_small_cache_size": 128, 00:31:08.412 "iobuf_large_cache_size": 16 00:31:08.412 } 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "method": "bdev_raid_set_options", 00:31:08.412 "params": { 00:31:08.412 "process_window_size_kb": 1024, 00:31:08.412 "process_max_bandwidth_mb_sec": 0 00:31:08.412 } 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "method": "bdev_iscsi_set_options", 00:31:08.412 "params": { 00:31:08.412 "timeout_sec": 30 00:31:08.412 } 00:31:08.412 }, 00:31:08.412 { 00:31:08.412 "method": "bdev_nvme_set_options", 00:31:08.412 "params": { 00:31:08.412 "action_on_timeout": "none", 00:31:08.412 "timeout_us": 0, 00:31:08.412 "timeout_admin_us": 0, 00:31:08.412 "keep_alive_timeout_ms": 10000, 00:31:08.412 "arbitration_burst": 0, 00:31:08.412 "low_priority_weight": 0, 00:31:08.412 "medium_priority_weight": 0, 00:31:08.412 "high_priority_weight": 0, 00:31:08.412 "nvme_adminq_poll_period_us": 
10000, 00:31:08.412 "nvme_ioq_poll_period_us": 0, 00:31:08.412 "io_queue_requests": 512, 00:31:08.412 "delay_cmd_submit": true, 00:31:08.412 "transport_retry_count": 4, 00:31:08.412 "bdev_retry_count": 3, 00:31:08.412 "transport_ack_timeout": 0, 00:31:08.412 "ctrlr_loss_timeout_sec": 0, 00:31:08.412 "reconnect_delay_sec": 0, 00:31:08.412 "fast_io_fail_timeout_sec": 0, 00:31:08.412 "disable_auto_failback": false, 00:31:08.412 "generate_uuids": false, 00:31:08.412 "transport_tos": 0, 00:31:08.412 "nvme_error_stat": false, 00:31:08.412 "rdma_srq_size": 0, 00:31:08.413 "io_path_stat": false, 00:31:08.413 "allow_accel_sequence": false, 00:31:08.413 "rdma_max_cq_size": 0, 00:31:08.413 "rdma_cm_event_timeout_ms": 0, 00:31:08.413 "dhchap_digests": [ 00:31:08.413 "sha256", 00:31:08.413 "sha384", 00:31:08.413 "sha512" 00:31:08.413 ], 00:31:08.413 "dhchap_dhgroups": [ 00:31:08.413 "null", 00:31:08.413 "ffdhe2048", 00:31:08.413 "ffdhe3072", 00:31:08.413 "ffdhe4096", 00:31:08.413 "ffdhe6144", 00:31:08.413 "ffdhe8192" 00:31:08.413 ] 00:31:08.413 } 00:31:08.413 }, 00:31:08.413 { 00:31:08.413 "method": "bdev_nvme_attach_controller", 00:31:08.413 "params": { 00:31:08.413 "name": "nvme0", 00:31:08.413 "trtype": "TCP", 00:31:08.413 "adrfam": "IPv4", 00:31:08.413 "traddr": "127.0.0.1", 00:31:08.413 "trsvcid": "4420", 00:31:08.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.413 "prchk_reftag": false, 00:31:08.413 "prchk_guard": false, 00:31:08.413 "ctrlr_loss_timeout_sec": 0, 00:31:08.413 "reconnect_delay_sec": 0, 00:31:08.413 "fast_io_fail_timeout_sec": 0, 00:31:08.413 "psk": "key0", 00:31:08.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.413 "hdgst": false, 00:31:08.413 "ddgst": false, 00:31:08.413 "multipath": "multipath" 00:31:08.413 } 00:31:08.413 }, 00:31:08.413 { 00:31:08.413 "method": "bdev_nvme_set_hotplug", 00:31:08.413 "params": { 00:31:08.413 "period_us": 100000, 00:31:08.413 "enable": false 00:31:08.413 } 00:31:08.413 }, 00:31:08.413 { 00:31:08.413 "method": "bdev_wait_for_examine" 00:31:08.413 } 00:31:08.413 ] 00:31:08.413 }, 00:31:08.413 { 00:31:08.413 "subsystem": "nbd", 00:31:08.413 "config": [] 00:31:08.413 } 00:31:08.413 ] 00:31:08.413 }' 00:31:08.413 11:34:14 keyring_file -- keyring/file.sh@115 -- # killprocess 92659 00:31:08.413 11:34:14 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 92659 ']' 00:31:08.413 11:34:14 keyring_file -- common/autotest_common.sh@958 -- # kill -0 92659 00:31:08.413 11:34:14 keyring_file -- common/autotest_common.sh@959 -- # uname 00:31:08.413 11:34:14 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.413 11:34:14 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92659 00:31:08.413 killing process with pid 92659 00:31:08.413 Received shutdown signal, test time was about 1.000000 seconds 00:31:08.413 00:31:08.413 Latency(us) 00:31:08.413 [2024-12-10T11:34:15.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.413 [2024-12-10T11:34:15.239Z] =================================================================================================================== 00:31:08.413 [2024-12-10T11:34:15.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:08.413 11:34:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:08.413 11:34:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:08.413 11:34:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92659' 00:31:08.413 
11:34:15 keyring_file -- common/autotest_common.sh@973 -- # kill 92659 00:31:08.413 11:34:15 keyring_file -- common/autotest_common.sh@978 -- # wait 92659 00:31:09.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:09.425 11:34:16 keyring_file -- keyring/file.sh@118 -- # bperfpid=92934 00:31:09.425 11:34:16 keyring_file -- keyring/file.sh@120 -- # waitforlisten 92934 /var/tmp/bperf.sock 00:31:09.425 11:34:16 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:09.425 11:34:16 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 92934 ']' 00:31:09.425 11:34:16 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:09.425 11:34:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.425 11:34:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:09.425 11:34:16 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:31:09.425 "subsystems": [ 00:31:09.425 { 00:31:09.425 "subsystem": "keyring", 00:31:09.425 "config": [ 00:31:09.425 { 00:31:09.425 "method": "keyring_file_add_key", 00:31:09.425 "params": { 00:31:09.425 "name": "key0", 00:31:09.425 "path": "/tmp/tmp.1gzCm9aXGB" 00:31:09.425 } 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "method": "keyring_file_add_key", 00:31:09.425 "params": { 00:31:09.425 "name": "key1", 00:31:09.425 "path": "/tmp/tmp.iqgzTqEXF4" 00:31:09.425 } 00:31:09.425 } 00:31:09.425 ] 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "subsystem": "iobuf", 00:31:09.425 "config": [ 00:31:09.425 { 00:31:09.425 "method": "iobuf_set_options", 00:31:09.425 "params": { 00:31:09.425 "small_pool_count": 8192, 00:31:09.425 "large_pool_count": 1024, 00:31:09.425 "small_bufsize": 8192, 00:31:09.425 "large_bufsize": 135168, 00:31:09.425 "enable_numa": false 00:31:09.425 } 00:31:09.425 } 00:31:09.425 ] 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "subsystem": "sock", 00:31:09.425 "config": [ 00:31:09.425 { 00:31:09.425 "method": "sock_set_default_impl", 00:31:09.425 "params": { 00:31:09.425 "impl_name": "uring" 00:31:09.425 } 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "method": "sock_impl_set_options", 00:31:09.425 "params": { 00:31:09.425 "impl_name": "ssl", 00:31:09.425 "recv_buf_size": 4096, 00:31:09.425 "send_buf_size": 4096, 00:31:09.425 "enable_recv_pipe": true, 00:31:09.425 "enable_quickack": false, 00:31:09.425 "enable_placement_id": 0, 00:31:09.425 "enable_zerocopy_send_server": true, 00:31:09.425 "enable_zerocopy_send_client": false, 00:31:09.425 "zerocopy_threshold": 0, 00:31:09.425 "tls_version": 0, 00:31:09.425 "enable_ktls": false 00:31:09.425 } 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "method": "sock_impl_set_options", 00:31:09.425 "params": { 00:31:09.425 "impl_name": "posix", 00:31:09.425 "recv_buf_size": 2097152, 00:31:09.425 "send_buf_size": 2097152, 00:31:09.425 "enable_recv_pipe": true, 00:31:09.425 "enable_quickack": false, 00:31:09.425 "enable_placement_id": 0, 00:31:09.425 "enable_zerocopy_send_server": true, 00:31:09.425 "enable_zerocopy_send_client": false, 00:31:09.425 "zerocopy_threshold": 0, 00:31:09.425 "tls_version": 0, 00:31:09.425 "enable_ktls": false 00:31:09.425 } 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "method": "sock_impl_set_options", 00:31:09.425 "params": { 00:31:09.425 "impl_name": "uring", 00:31:09.425 
"recv_buf_size": 2097152, 00:31:09.425 "send_buf_size": 2097152, 00:31:09.425 "enable_recv_pipe": true, 00:31:09.425 "enable_quickack": false, 00:31:09.425 "enable_placement_id": 0, 00:31:09.425 "enable_zerocopy_send_server": false, 00:31:09.425 "enable_zerocopy_send_client": false, 00:31:09.425 "zerocopy_threshold": 0, 00:31:09.425 "tls_version": 0, 00:31:09.425 "enable_ktls": false 00:31:09.425 } 00:31:09.425 } 00:31:09.425 ] 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "subsystem": "vmd", 00:31:09.425 "config": [] 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "subsystem": "accel", 00:31:09.425 "config": [ 00:31:09.425 { 00:31:09.425 "method": "accel_set_options", 00:31:09.425 "params": { 00:31:09.425 "small_cache_size": 128, 00:31:09.425 "large_cache_size": 16, 00:31:09.425 "task_count": 2048, 00:31:09.425 "sequence_count": 2048, 00:31:09.425 "buf_count": 2048 00:31:09.425 } 00:31:09.425 } 00:31:09.425 ] 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "subsystem": "bdev", 00:31:09.425 "config": [ 00:31:09.425 { 00:31:09.425 "method": "bdev_set_options", 00:31:09.425 "params": { 00:31:09.425 "bdev_io_pool_size": 65535, 00:31:09.425 "bdev_io_cache_size": 256, 00:31:09.425 "bdev_auto_examine": true, 00:31:09.425 "iobuf_small_cache_size": 128, 00:31:09.425 "iobuf_large_cache_size": 16 00:31:09.425 } 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "method": "bdev_raid_set_options", 00:31:09.425 "params": { 00:31:09.425 "process_window_size_kb": 1024, 00:31:09.425 "process_max_bandwidth_mb_sec": 0 00:31:09.425 } 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "method": "bdev_iscsi_set_options", 00:31:09.425 "params": { 00:31:09.425 "timeout_sec": 30 00:31:09.425 } 00:31:09.425 }, 00:31:09.425 { 00:31:09.425 "method": "bdev_nvme_set_options", 00:31:09.425 "params": { 00:31:09.425 "action_on_timeout": "none", 00:31:09.425 "timeout_us": 0, 00:31:09.425 "timeout_admin_us": 0, 00:31:09.425 "keep_alive_timeout_ms": 10000, 00:31:09.425 "arbitration_burst": 0, 00:31:09.425 "low_priority_weight": 0, 00:31:09.425 "medium_priority_weight": 0, 00:31:09.425 "high_priority_weight": 0, 00:31:09.425 "nvme_adminq_poll_period_us": 10000, 00:31:09.425 "nvme_ioq_poll_period_us": 0, 00:31:09.425 "io_queue_requests": 512, 00:31:09.425 "delay_cmd_submit": true, 00:31:09.425 "transport_retry_count": 4, 00:31:09.425 "bdev_retry_count": 3, 00:31:09.425 "transport_ack_timeout": 0, 00:31:09.425 "ctrlr_loss_timeout_sec": 0, 00:31:09.425 "reconnect_delay_sec": 0, 00:31:09.425 "fast_io_fail_timeout_sec": 0, 00:31:09.425 "disable_auto_failback": false, 00:31:09.425 "generate_uuids": false, 00:31:09.425 "transport_tos": 0, 00:31:09.425 "nvme_error_stat": false, 00:31:09.425 "rdma_srq_size": 0, 00:31:09.425 "io_path_stat": false, 00:31:09.425 "allow_accel_sequence": false, 00:31:09.425 "rdma_max_cq_size": 0, 00:31:09.425 "rdma_cm_event_timeout_ms": 0, 00:31:09.425 "dhchap_digests": [ 00:31:09.425 "sha256", 00:31:09.425 "sha384", 00:31:09.425 "sha512" 00:31:09.425 ], 00:31:09.426 "dhchap_dhgroups": [ 00:31:09.426 "null", 00:31:09.426 "ffdhe2048", 00:31:09.426 "ffdhe3072", 00:31:09.426 "ffdhe4096", 00:31:09.426 "ffdhe6144", 00:31:09.426 "ffdhe8192" 00:31:09.426 ] 00:31:09.426 } 00:31:09.426 }, 00:31:09.426 { 00:31:09.426 "method": "bdev_nvme_attach_controller", 00:31:09.426 "params": { 00:31:09.426 "name": "nvme0", 00:31:09.426 "trtype": "TCP", 00:31:09.426 "adrfam": "IPv4", 00:31:09.426 "traddr": "127.0.0.1", 00:31:09.426 "trsvcid": "4420", 00:31:09.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:09.426 "prchk_reftag": false, 00:31:09.426 
"prchk_guard": false, 00:31:09.426 "ctrlr_loss_timeout_sec": 0, 00:31:09.426 "reconnect_delay_sec": 0, 00:31:09.426 "fast_io_fail_timeout_sec": 0, 00:31:09.426 "psk": "key0", 00:31:09.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:09.426 "hdgst": false, 00:31:09.426 "ddgst": false, 00:31:09.426 "multipath": "multipath" 00:31:09.426 } 00:31:09.426 }, 00:31:09.426 { 00:31:09.426 "method": "bdev_nvme_set_hotplug", 00:31:09.426 "params": { 00:31:09.426 "period_us": 100000, 00:31:09.426 "enable": false 00:31:09.426 } 00:31:09.426 }, 00:31:09.426 { 00:31:09.426 "method": "bdev_wait_for_examine" 00:31:09.426 } 00:31:09.426 ] 00:31:09.426 }, 00:31:09.426 { 00:31:09.426 "subsystem": "nbd", 00:31:09.426 "config": [] 00:31:09.426 } 00:31:09.426 ] 00:31:09.426 }' 00:31:09.426 11:34:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.426 11:34:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:09.426 [2024-12-10 11:34:16.132072] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 00:31:09.426 [2024-12-10 11:34:16.132277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92934 ] 00:31:09.684 [2024-12-10 11:34:16.319908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.684 [2024-12-10 11:34:16.450914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.943 [2024-12-10 11:34:16.722303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:10.202 [2024-12-10 11:34:16.897727] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:10.461 11:34:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:10.461 11:34:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:31:10.461 11:34:17 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:31:10.461 11:34:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:10.461 11:34:17 keyring_file -- keyring/file.sh@121 -- # jq length 00:31:10.719 11:34:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:10.719 11:34:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:31:10.719 11:34:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:10.719 11:34:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:10.719 11:34:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:10.719 11:34:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:10.719 11:34:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:10.977 11:34:17 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:31:10.978 11:34:17 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:31:10.978 11:34:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:10.978 11:34:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:10.978 11:34:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:10.978 11:34:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:10.978 11:34:17 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:11.545 11:34:18 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:31:11.545 11:34:18 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:31:11.545 11:34:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:11.545 11:34:18 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:31:11.545 11:34:18 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:31:11.545 11:34:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:11.545 11:34:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1gzCm9aXGB /tmp/tmp.iqgzTqEXF4 00:31:11.545 11:34:18 keyring_file -- keyring/file.sh@20 -- # killprocess 92934 00:31:11.545 11:34:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 92934 ']' 00:31:11.545 11:34:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 92934 00:31:11.545 11:34:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:31:11.545 11:34:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.545 11:34:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92934 00:31:11.803 killing process with pid 92934 00:31:11.803 Received shutdown signal, test time was about 1.000000 seconds 00:31:11.803 00:31:11.803 Latency(us) 00:31:11.803 [2024-12-10T11:34:18.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.803 [2024-12-10T11:34:18.629Z] =================================================================================================================== 00:31:11.803 [2024-12-10T11:34:18.629Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:11.803 11:34:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:11.803 11:34:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:11.803 11:34:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92934' 00:31:11.803 11:34:18 keyring_file -- common/autotest_common.sh@973 -- # kill 92934 00:31:11.803 11:34:18 keyring_file -- common/autotest_common.sh@978 -- # wait 92934 00:31:12.737 11:34:19 keyring_file -- keyring/file.sh@21 -- # killprocess 92637 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 92637 ']' 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 92637 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92637 00:31:12.737 killing process with pid 92637 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92637' 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@973 -- # kill 92637 00:31:12.737 11:34:19 keyring_file -- common/autotest_common.sh@978 -- # wait 92637 00:31:15.272 00:31:15.272 real 0m21.196s 00:31:15.272 user 0m49.896s 00:31:15.272 sys 0m3.125s 00:31:15.272 11:34:21 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.272 11:34:21 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:15.272 ************************************ 00:31:15.272 END TEST keyring_file 00:31:15.272 ************************************ 00:31:15.272 11:34:21 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:31:15.272 11:34:21 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:15.272 11:34:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:15.272 11:34:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.272 11:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:15.272 ************************************ 00:31:15.272 START TEST keyring_linux 00:31:15.272 ************************************ 00:31:15.272 11:34:21 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:15.272 Joined session keyring: 703660252 00:31:15.272 * Looking for test storage... 00:31:15.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:15.272 11:34:21 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:15.272 11:34:21 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:31:15.272 11:34:21 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:15.272 11:34:21 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@345 -- # : 1 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.272 11:34:21 keyring_linux -- scripts/common.sh@368 -- # return 0 00:31:15.272 11:34:21 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.272 11:34:21 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:15.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.272 --rc genhtml_branch_coverage=1 00:31:15.272 --rc genhtml_function_coverage=1 00:31:15.272 --rc genhtml_legend=1 00:31:15.272 --rc geninfo_all_blocks=1 00:31:15.272 --rc geninfo_unexecuted_blocks=1 00:31:15.272 00:31:15.272 ' 00:31:15.272 11:34:21 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:15.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.272 --rc genhtml_branch_coverage=1 00:31:15.272 --rc genhtml_function_coverage=1 00:31:15.272 --rc genhtml_legend=1 00:31:15.272 --rc geninfo_all_blocks=1 00:31:15.272 --rc geninfo_unexecuted_blocks=1 00:31:15.272 00:31:15.272 ' 00:31:15.273 11:34:21 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:15.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.273 --rc genhtml_branch_coverage=1 00:31:15.273 --rc genhtml_function_coverage=1 00:31:15.273 --rc genhtml_legend=1 00:31:15.273 --rc geninfo_all_blocks=1 00:31:15.273 --rc geninfo_unexecuted_blocks=1 00:31:15.273 00:31:15.273 ' 00:31:15.273 11:34:21 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:15.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.273 --rc genhtml_branch_coverage=1 00:31:15.273 --rc genhtml_function_coverage=1 00:31:15.273 --rc genhtml_legend=1 00:31:15.273 --rc geninfo_all_blocks=1 00:31:15.273 --rc geninfo_unexecuted_blocks=1 00:31:15.273 00:31:15.273 ' 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.273 11:34:21 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=20cf3ff5-7c8b-4175-aa20-a641780c6f81 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:15.273 11:34:21 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.273 11:34:21 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.273 11:34:21 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.273 11:34:21 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.273 11:34:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.273 11:34:21 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.273 11:34:21 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.273 11:34:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:15.273 11:34:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@51 -- # : 0 
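The keyring_linux suite above is launched through scripts/keyctl-session-wrapper, which is why the log reports "Joined session keyring: 703660252": every key the test places on @s (the session keyring) is scoped to that session and disappears when the wrapper exits, so repeated runs cannot collide. A minimal hand-run equivalent, assuming the wrapper does little more than start the test script inside a fresh session (the wrapper's own contents are not reproduced in this log):

  # illustrative only -- paths taken from the commands visible above
  keyctl session - /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh   # prints "Joined session keyring: <serial>"
  keyctl show @s                                                        # the :spdk-test:key* entries created below live only here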
00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:15.273 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:15.273 /tmp/:spdk-test:key0 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:31:15.273 11:34:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:15.273 /tmp/:spdk-test:key1 00:31:15.273 11:34:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=93115 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:15.273 11:34:21 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 93115 00:31:15.273 11:34:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 93115 ']' 00:31:15.273 11:34:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.273 11:34:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.273 11:34:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.273 11:34:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.273 11:34:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:15.273 [2024-12-10 11:34:22.034410] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:31:15.273 [2024-12-10 11:34:22.034580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93115 ] 00:31:15.533 [2024-12-10 11:34:22.218952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.533 [2024-12-10 11:34:22.344767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.792 [2024-12-10 11:34:22.590723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:16.360 11:34:23 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.360 11:34:23 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:31:16.360 11:34:23 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:16.360 11:34:23 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.360 11:34:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:16.360 [2024-12-10 11:34:23.135174] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.360 null0 00:31:16.360 [2024-12-10 11:34:23.167192] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:16.360 [2024-12-10 11:34:23.167516] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:16.619 11:34:23 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.619 11:34:23 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:16.619 226472237 00:31:16.619 11:34:23 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:16.619 705399902 00:31:16.619 11:34:23 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=93133 00:31:16.619 11:34:23 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:16.619 11:34:23 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 93133 /var/tmp/bperf.sock 00:31:16.619 11:34:23 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 93133 ']' 00:31:16.619 11:34:23 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:16.619 11:34:23 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:16.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:16.619 11:34:23 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:16.619 11:34:23 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:16.619 11:34:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:16.619 [2024-12-10 11:34:23.304654] Starting SPDK v25.01-pre git sha1 52a413487 / DPDK 24.03.0 initialization... 
00:31:16.619 [2024-12-10 11:34:23.304822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93133 ] 00:31:16.878 [2024-12-10 11:34:23.489299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.878 [2024-12-10 11:34:23.615565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.844 11:34:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:17.844 11:34:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:31:17.844 11:34:24 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:17.844 11:34:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:17.844 11:34:24 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:17.844 11:34:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:18.412 [2024-12-10 11:34:25.124701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:18.670 11:34:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:18.670 11:34:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:18.928 [2024-12-10 11:34:25.556970] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:18.928 nvme0n1 00:31:18.928 11:34:25 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:18.928 11:34:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:18.928 11:34:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:18.928 11:34:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:18.928 11:34:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:18.928 11:34:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.187 11:34:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:19.187 11:34:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:19.187 11:34:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:19.187 11:34:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:19.187 11:34:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.187 11:34:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.187 11:34:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:19.445 11:34:26 keyring_linux -- keyring/linux.sh@25 -- # sn=226472237 00:31:19.445 11:34:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:19.445 11:34:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
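bdevperf is started with --wait-for-rpc, so before any I/O the test drives it over /var/tmp/bperf.sock: the Linux keyring module is enabled, framework initialization is completed, and an NVMe/TCP controller is attached whose TLS PSK is named by the session-keyring entry :spdk-test:key0 rather than by a file path; check_keys then verifies that exactly one key is registered and that its serial number matches what keyctl reports. The RPC sequence, gathered here for readability (each command appears verbatim in the log above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable     # resolve PSK names via the kernel keyring
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq length           # check_keys expects 1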
00:31:19.445 11:34:26 keyring_linux -- keyring/linux.sh@26 -- # [[ 226472237 == \2\2\6\4\7\2\2\3\7 ]] 00:31:19.445 11:34:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 226472237 00:31:19.445 11:34:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:19.445 11:34:26 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:19.704 Running I/O for 1 seconds... 00:31:20.641 8356.00 IOPS, 32.64 MiB/s 00:31:20.641 Latency(us) 00:31:20.641 [2024-12-10T11:34:27.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:20.641 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:20.641 nvme0n1 : 1.01 8369.96 32.70 0.00 0.00 15176.32 5391.83 22282.24 00:31:20.641 [2024-12-10T11:34:27.467Z] =================================================================================================================== 00:31:20.641 [2024-12-10T11:34:27.467Z] Total : 8369.96 32.70 0.00 0.00 15176.32 5391.83 22282.24 00:31:20.641 { 00:31:20.641 "results": [ 00:31:20.641 { 00:31:20.641 "job": "nvme0n1", 00:31:20.641 "core_mask": "0x2", 00:31:20.641 "workload": "randread", 00:31:20.641 "status": "finished", 00:31:20.641 "queue_depth": 128, 00:31:20.641 "io_size": 4096, 00:31:20.641 "runtime": 1.013744, 00:31:20.641 "iops": 8369.963225429694, 00:31:20.641 "mibps": 32.69516884933474, 00:31:20.641 "io_failed": 0, 00:31:20.641 "io_timeout": 0, 00:31:20.641 "avg_latency_us": 15176.320061713186, 00:31:20.641 "min_latency_us": 5391.825454545455, 00:31:20.641 "max_latency_us": 22282.24 00:31:20.641 } 00:31:20.641 ], 00:31:20.641 "core_count": 1 00:31:20.641 } 00:31:20.641 11:34:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:20.641 11:34:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:20.900 11:34:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:20.900 11:34:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:20.900 11:34:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:20.900 11:34:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:20.900 11:34:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:20.900 11:34:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:21.467 11:34:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:21.467 11:34:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:21.467 11:34:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:21.467 11:34:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:21.467 11:34:28 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:31:21.468 11:34:28 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:21.468 11:34:28 
keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:31:21.468 11:34:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:21.468 11:34:28 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:31:21.468 11:34:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:21.468 11:34:28 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:21.468 11:34:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:21.726 [2024-12-10 11:34:28.356582] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:21.726 [2024-12-10 11:34:28.356670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:31:21.726 [2024-12-10 11:34:28.357634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:31:21.726 [2024-12-10 11:34:28.358624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:31:21.726 [2024-12-10 11:34:28.358675] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:21.726 [2024-12-10 11:34:28.358694] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:31:21.726 [2024-12-10 11:34:28.358711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
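The second attach attempt is the negative path: linux.sh points --psk at :spdk-test:key1 and wraps the call in the NOT helper, so the test only passes when the RPC fails, as it does here (the connection errors above and the JSON-RPC error response that follows). A plain-bash sketch of the same expected-failure assertion, assuming NOT simply inverts the wrapped command's exit status:

  # hypothetical stand-alone equivalent of "NOT bperf_cmd bdev_nvme_attach_controller ... --psk :spdk-test:key1"
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
      echo 'attach with :spdk-test:key1 unexpectedly succeeded' >&2
      exit 1
  fi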
00:31:21.726 request: 00:31:21.727 { 00:31:21.727 "name": "nvme0", 00:31:21.727 "trtype": "tcp", 00:31:21.727 "traddr": "127.0.0.1", 00:31:21.727 "adrfam": "ipv4", 00:31:21.727 "trsvcid": "4420", 00:31:21.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.727 "prchk_reftag": false, 00:31:21.727 "prchk_guard": false, 00:31:21.727 "hdgst": false, 00:31:21.727 "ddgst": false, 00:31:21.727 "psk": ":spdk-test:key1", 00:31:21.727 "allow_unrecognized_csi": false, 00:31:21.727 "method": "bdev_nvme_attach_controller", 00:31:21.727 "req_id": 1 00:31:21.727 } 00:31:21.727 Got JSON-RPC error response 00:31:21.727 response: 00:31:21.727 { 00:31:21.727 "code": -5, 00:31:21.727 "message": "Input/output error" 00:31:21.727 } 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@33 -- # sn=226472237 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 226472237 00:31:21.727 1 links removed 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@33 -- # sn=705399902 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 705399902 00:31:21.727 1 links removed 00:31:21.727 11:34:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 93133 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 93133 ']' 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 93133 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93133 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:21.727 killing process with pid 93133 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93133' 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 93133 00:31:21.727 Received shutdown signal, test time was about 1.000000 seconds 00:31:21.727 00:31:21.727 Latency(us) 
00:31:21.727 [2024-12-10T11:34:28.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.727 [2024-12-10T11:34:28.553Z] =================================================================================================================== 00:31:21.727 [2024-12-10T11:34:28.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:21.727 11:34:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 93133 00:31:22.662 11:34:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 93115 00:31:22.662 11:34:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 93115 ']' 00:31:22.662 11:34:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 93115 00:31:22.662 11:34:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:31:22.662 11:34:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:22.662 11:34:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93115 00:31:22.921 11:34:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:22.921 11:34:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:22.921 killing process with pid 93115 00:31:22.921 11:34:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93115' 00:31:22.921 11:34:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 93115 00:31:22.921 11:34:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 93115 00:31:25.452 00:31:25.452 real 0m10.238s 00:31:25.452 user 0m18.449s 00:31:25.452 sys 0m1.666s 00:31:25.452 11:34:31 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:25.452 11:34:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:25.452 ************************************ 00:31:25.452 END TEST keyring_linux 00:31:25.452 ************************************ 00:31:25.452 11:34:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:25.452 11:34:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:25.452 11:34:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:25.452 11:34:31 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:25.452 11:34:31 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:25.452 11:34:31 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:25.452 11:34:31 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:25.452 11:34:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:25.452 11:34:31 -- common/autotest_common.sh@10 -- # set +x 00:31:25.452 11:34:31 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:25.452 11:34:31 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:25.452 11:34:31 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:25.452 11:34:31 -- common/autotest_common.sh@10 -- # set +x 00:31:27.356 INFO: APP EXITING 00:31:27.356 INFO: killing all VMs 
00:31:27.356 INFO: killing vhost app 00:31:27.356 INFO: EXIT DONE 00:31:27.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:27.615 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:27.874 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:28.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:28.445 Cleaning 00:31:28.445 Removing: /var/run/dpdk/spdk0/config 00:31:28.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:28.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:28.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:28.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:28.445 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:28.445 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:28.445 Removing: /var/run/dpdk/spdk1/config 00:31:28.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:28.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:28.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:28.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:28.445 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:28.445 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:28.445 Removing: /var/run/dpdk/spdk2/config 00:31:28.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:28.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:28.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:28.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:28.445 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:28.445 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:28.445 Removing: /var/run/dpdk/spdk3/config 00:31:28.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:28.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:28.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:28.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:28.445 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:28.445 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:28.446 Removing: /var/run/dpdk/spdk4/config 00:31:28.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:28.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:28.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:28.446 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:28.446 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:28.446 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:28.446 Removing: /dev/shm/nvmf_trace.0 00:31:28.446 Removing: /dev/shm/spdk_tgt_trace.pid57483 00:31:28.446 Removing: /var/run/dpdk/spdk0 00:31:28.446 Removing: /var/run/dpdk/spdk1 00:31:28.446 Removing: /var/run/dpdk/spdk2 00:31:28.446 Removing: /var/run/dpdk/spdk3 00:31:28.446 Removing: /var/run/dpdk/spdk4 00:31:28.705 Removing: /var/run/dpdk/spdk_pid57264 00:31:28.705 Removing: /var/run/dpdk/spdk_pid57483 00:31:28.705 Removing: /var/run/dpdk/spdk_pid57706 00:31:28.705 Removing: /var/run/dpdk/spdk_pid57810 00:31:28.705 Removing: /var/run/dpdk/spdk_pid57855 00:31:28.705 Removing: /var/run/dpdk/spdk_pid57983 00:31:28.705 Removing: /var/run/dpdk/spdk_pid58007 00:31:28.705 Removing: /var/run/dpdk/spdk_pid58166 00:31:28.705 Removing: /var/run/dpdk/spdk_pid58380 00:31:28.705 Removing: /var/run/dpdk/spdk_pid58541 00:31:28.705 Removing: /var/run/dpdk/spdk_pid58640 00:31:28.705 
Removing: /var/run/dpdk/spdk_pid58747 00:31:28.705 Removing: /var/run/dpdk/spdk_pid58869 00:31:28.705 Removing: /var/run/dpdk/spdk_pid58966 00:31:28.705 Removing: /var/run/dpdk/spdk_pid59011 00:31:28.705 Removing: /var/run/dpdk/spdk_pid59042 00:31:28.705 Removing: /var/run/dpdk/spdk_pid59118 00:31:28.705 Removing: /var/run/dpdk/spdk_pid59213 00:31:28.705 Removing: /var/run/dpdk/spdk_pid59682 00:31:28.705 Removing: /var/run/dpdk/spdk_pid59756 00:31:28.705 Removing: /var/run/dpdk/spdk_pid59830 00:31:28.705 Removing: /var/run/dpdk/spdk_pid59846 00:31:28.705 Removing: /var/run/dpdk/spdk_pid59991 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60013 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60143 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60164 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60230 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60254 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60318 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60336 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60531 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60567 00:31:28.705 Removing: /var/run/dpdk/spdk_pid60651 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61008 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61032 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61075 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61106 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61134 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61170 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61196 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61229 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61260 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61291 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61318 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61355 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61380 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61408 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61439 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61470 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61503 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61534 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61560 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61587 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61635 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61661 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61702 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61792 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61832 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61854 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61900 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61927 00:31:28.705 Removing: /var/run/dpdk/spdk_pid61941 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62001 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62027 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62067 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62094 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62116 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62143 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62164 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62186 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62213 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62240 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62286 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62330 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62363 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62409 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62436 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62461 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62519 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62547 00:31:28.705 Removing: 
/var/run/dpdk/spdk_pid62587 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62606 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62631 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62662 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62687 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62712 00:31:28.705 Removing: /var/run/dpdk/spdk_pid62737 00:31:28.963 Removing: /var/run/dpdk/spdk_pid62768 00:31:28.963 Removing: /var/run/dpdk/spdk_pid62862 00:31:28.963 Removing: /var/run/dpdk/spdk_pid62971 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63140 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63193 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63248 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63280 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63309 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63335 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63386 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63414 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63504 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63547 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63632 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63776 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63882 00:31:28.963 Removing: /var/run/dpdk/spdk_pid63940 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64074 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64134 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64185 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64435 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64553 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64599 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64630 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64681 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64725 00:31:28.963 Removing: /var/run/dpdk/spdk_pid64772 00:31:28.964 Removing: /var/run/dpdk/spdk_pid64821 00:31:28.964 Removing: /var/run/dpdk/spdk_pid65231 00:31:28.964 Removing: /var/run/dpdk/spdk_pid65270 00:31:28.964 Removing: /var/run/dpdk/spdk_pid65649 00:31:28.964 Removing: /var/run/dpdk/spdk_pid66138 00:31:28.964 Removing: /var/run/dpdk/spdk_pid66438 00:31:28.964 Removing: /var/run/dpdk/spdk_pid67383 00:31:28.964 Removing: /var/run/dpdk/spdk_pid68376 00:31:28.964 Removing: /var/run/dpdk/spdk_pid68505 00:31:28.964 Removing: /var/run/dpdk/spdk_pid68585 00:31:28.964 Removing: /var/run/dpdk/spdk_pid70068 00:31:28.964 Removing: /var/run/dpdk/spdk_pid70441 00:31:28.964 Removing: /var/run/dpdk/spdk_pid74363 00:31:28.964 Removing: /var/run/dpdk/spdk_pid74778 00:31:28.964 Removing: /var/run/dpdk/spdk_pid74889 00:31:28.964 Removing: /var/run/dpdk/spdk_pid75037 00:31:28.964 Removing: /var/run/dpdk/spdk_pid75078 00:31:28.964 Removing: /var/run/dpdk/spdk_pid75119 00:31:28.964 Removing: /var/run/dpdk/spdk_pid75158 00:31:28.964 Removing: /var/run/dpdk/spdk_pid75285 00:31:28.964 Removing: /var/run/dpdk/spdk_pid75428 00:31:28.964 Removing: /var/run/dpdk/spdk_pid75629 00:31:28.964 Removing: /var/run/dpdk/spdk_pid75735 00:31:28.964 Removing: /var/run/dpdk/spdk_pid75953 00:31:28.964 Removing: /var/run/dpdk/spdk_pid76056 00:31:28.964 Removing: /var/run/dpdk/spdk_pid76168 00:31:28.964 Removing: /var/run/dpdk/spdk_pid76550 00:31:28.964 Removing: /var/run/dpdk/spdk_pid76987 00:31:28.964 Removing: /var/run/dpdk/spdk_pid76988 00:31:28.964 Removing: /var/run/dpdk/spdk_pid76989 00:31:28.964 Removing: /var/run/dpdk/spdk_pid77272 00:31:28.964 Removing: /var/run/dpdk/spdk_pid77563 00:31:28.964 Removing: /var/run/dpdk/spdk_pid77570 00:31:28.964 Removing: /var/run/dpdk/spdk_pid79925 00:31:28.964 Removing: /var/run/dpdk/spdk_pid80352 00:31:28.964 Removing: /var/run/dpdk/spdk_pid80362 
00:31:28.964 Removing: /var/run/dpdk/spdk_pid80707 00:31:28.964 Removing: /var/run/dpdk/spdk_pid80722 00:31:28.964 Removing: /var/run/dpdk/spdk_pid80742 00:31:28.964 Removing: /var/run/dpdk/spdk_pid80777 00:31:28.964 Removing: /var/run/dpdk/spdk_pid80789 00:31:28.964 Removing: /var/run/dpdk/spdk_pid80879 00:31:28.964 Removing: /var/run/dpdk/spdk_pid80893 00:31:28.964 Removing: /var/run/dpdk/spdk_pid80997 00:31:28.964 Removing: /var/run/dpdk/spdk_pid81004 00:31:28.964 Removing: /var/run/dpdk/spdk_pid81109 00:31:28.964 Removing: /var/run/dpdk/spdk_pid81118 00:31:28.964 Removing: /var/run/dpdk/spdk_pid81573 00:31:28.964 Removing: /var/run/dpdk/spdk_pid81613 00:31:28.964 Removing: /var/run/dpdk/spdk_pid81723 00:31:28.964 Removing: /var/run/dpdk/spdk_pid81800 00:31:28.964 Removing: /var/run/dpdk/spdk_pid82174 00:31:28.964 Removing: /var/run/dpdk/spdk_pid82378 00:31:28.964 Removing: /var/run/dpdk/spdk_pid82836 00:31:28.964 Removing: /var/run/dpdk/spdk_pid83411 00:31:28.964 Removing: /var/run/dpdk/spdk_pid84317 00:31:28.964 Removing: /var/run/dpdk/spdk_pid84976 00:31:28.964 Removing: /var/run/dpdk/spdk_pid84979 00:31:28.964 Removing: /var/run/dpdk/spdk_pid87038 00:31:28.964 Removing: /var/run/dpdk/spdk_pid87111 00:31:28.964 Removing: /var/run/dpdk/spdk_pid87179 00:31:28.964 Removing: /var/run/dpdk/spdk_pid87250 00:31:29.223 Removing: /var/run/dpdk/spdk_pid87392 00:31:29.223 Removing: /var/run/dpdk/spdk_pid87465 00:31:29.223 Removing: /var/run/dpdk/spdk_pid87531 00:31:29.223 Removing: /var/run/dpdk/spdk_pid87600 00:31:29.223 Removing: /var/run/dpdk/spdk_pid87992 00:31:29.223 Removing: /var/run/dpdk/spdk_pid89219 00:31:29.223 Removing: /var/run/dpdk/spdk_pid89367 00:31:29.223 Removing: /var/run/dpdk/spdk_pid89616 00:31:29.223 Removing: /var/run/dpdk/spdk_pid90227 00:31:29.223 Removing: /var/run/dpdk/spdk_pid90387 00:31:29.223 Removing: /var/run/dpdk/spdk_pid90553 00:31:29.223 Removing: /var/run/dpdk/spdk_pid90650 00:31:29.223 Removing: /var/run/dpdk/spdk_pid90817 00:31:29.223 Removing: /var/run/dpdk/spdk_pid90926 00:31:29.223 Removing: /var/run/dpdk/spdk_pid91661 00:31:29.223 Removing: /var/run/dpdk/spdk_pid91692 00:31:29.223 Removing: /var/run/dpdk/spdk_pid91734 00:31:29.223 Removing: /var/run/dpdk/spdk_pid92093 00:31:29.223 Removing: /var/run/dpdk/spdk_pid92124 00:31:29.223 Removing: /var/run/dpdk/spdk_pid92160 00:31:29.223 Removing: /var/run/dpdk/spdk_pid92637 00:31:29.223 Removing: /var/run/dpdk/spdk_pid92659 00:31:29.223 Removing: /var/run/dpdk/spdk_pid92934 00:31:29.223 Removing: /var/run/dpdk/spdk_pid93115 00:31:29.223 Removing: /var/run/dpdk/spdk_pid93133 00:31:29.223 Clean 00:31:29.223 11:34:35 -- common/autotest_common.sh@1453 -- # return 0 00:31:29.223 11:34:35 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:29.223 11:34:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:29.223 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:29.223 11:34:35 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:31:29.223 11:34:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:29.223 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:29.223 11:34:35 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:29.223 11:34:35 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:29.223 11:34:35 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:29.223 11:34:36 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:29.223 11:34:36 -- spdk/autotest.sh@398 
-- # hostname 00:31:29.223 11:34:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:29.482 geninfo: WARNING: invalid characters removed from testname! 00:32:01.552 11:35:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:05.738 11:35:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:09.022 11:35:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:11.553 11:35:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:14.838 11:35:21 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:17.371 11:35:24 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:20.676 11:35:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:20.676 11:35:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:32:20.676 11:35:27 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:32:20.676 11:35:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:20.676 11:35:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:32:20.676 11:35:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:20.676 + [[ -n 5249 ]] 00:32:20.676 + sudo kill 5249 00:32:20.685 [Pipeline] } 00:32:20.700 [Pipeline] // timeout 00:32:20.705 [Pipeline] } 00:32:20.719 [Pipeline] // stage 00:32:20.724 [Pipeline] } 00:32:20.737 [Pipeline] // catchError 00:32:20.746 [Pipeline] stage 00:32:20.748 [Pipeline] { (Stop VM) 00:32:20.760 [Pipeline] sh 00:32:21.038 + vagrant halt 00:32:25.227 ==> default: Halting domain... 00:32:31.809 [Pipeline] sh 00:32:32.089 + vagrant destroy -f 00:32:36.278 ==> default: Removing domain... 00:32:36.291 [Pipeline] sh 00:32:36.571 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:32:36.580 [Pipeline] } 00:32:36.595 [Pipeline] // stage 00:32:36.600 [Pipeline] } 00:32:36.614 [Pipeline] // dir 00:32:36.619 [Pipeline] } 00:32:36.636 [Pipeline] // wrap 00:32:36.645 [Pipeline] } 00:32:36.659 [Pipeline] // catchError 00:32:36.669 [Pipeline] stage 00:32:36.671 [Pipeline] { (Epilogue) 00:32:36.683 [Pipeline] sh 00:32:36.965 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:45.091 [Pipeline] catchError 00:32:45.093 [Pipeline] { 00:32:45.105 [Pipeline] sh 00:32:45.386 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:45.644 Artifacts sizes are good 00:32:45.653 [Pipeline] } 00:32:45.667 [Pipeline] // catchError 00:32:45.678 [Pipeline] archiveArtifacts 00:32:45.685 Archiving artifacts 00:32:45.812 [Pipeline] cleanWs 00:32:45.823 [WS-CLEANUP] Deleting project workspace... 00:32:45.823 [WS-CLEANUP] Deferred wipeout is used... 00:32:45.830 [WS-CLEANUP] done 00:32:45.832 [Pipeline] } 00:32:45.847 [Pipeline] // stage 00:32:45.852 [Pipeline] } 00:32:45.866 [Pipeline] // node 00:32:45.871 [Pipeline] End of Pipeline 00:32:45.910 Finished: SUCCESS
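For reference, the coverage post-processing that ran just before the VM teardown (the autotest.sh@398-@408 steps visible above) folds the data from this run into one report: cov_base.info and cov_test.info are merged, then DPDK, /usr, example and app sources are filtered back out of the total. The merge-and-filter pattern, reduced to its core flags (paths as in the log; the full invocations above also carry the --rc branch/function-coverage switches):

  out=/home/vagrant/spdk_repo/spdk/../output
  lcov -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info    # combine baseline and test captures
  lcov -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info              # strip sources that are not SPDK's own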